
2009 International Conference on High Performance Switching and Routing: Latest Publications

A novel traffic engineering method using on-chip diorama network on dynamically reconfigurable processor DAPDNA-2
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307432
Shan Gao, T. Kihara, S. Shimizu, Y. Arakawa, N. Yamanaka, Akifumi Watanabey
This paper proposes a novel traffic engineering method using an on-chip diorama network that consists of virtual nodes and virtual links. The diorama network is implemented on the reconfigurable processor DAPDNA-2. In recent years, traffic engineering has been widely researched as a way to guarantee QoS (Quality of Service). The proposal is an experimental solution using the on-chip diorama network, in which virtual links and virtual nodes are constructed from PEs (processing elements). We obtain realistic traffic fluctuations through the behavior of virtual packets exchanged on the on-chip diorama network. In this paper, as a first trial toward our final goal, we implemented the diorama network and confirmed basic path calculation, both of which are essential functions of our algorithm. Diorama-network traffic engineering can realize more sophisticated network designs such as adaptive traffic balancing or multi-metric design.
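The abstract mentions "basic path calculation" over the virtual nodes and links as an essential building block. A minimal software sketch of such a shortest-path computation (the topology, node names, and link costs below are hypothetical, not the paper's on-chip implementation):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over a weighted adjacency dict.

    graph: {node: {neighbor: link_cost}}
    Returns (total_cost, [node, ...]) for the cheapest src -> dst path.
    """
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from dst.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

# A toy four-node topology standing in for virtual nodes/links.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}
cost, path = shortest_path(topology, "A", "D")
# A -> B -> C -> D is cheaper (cost 3) than A -> C -> D (cost 5)
```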
Citations: 3
Packet processing with blocking for bursty traffic on multi-thread network processor
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307419
Yeim-Kuan Chang, Fang-Chen Kuo
It is well known that network traffic exhibits bursty accesses: bursts of packets with the same meaningful headers are often received by a router at nearly the same time. With such traffic, routers repeatedly perform the same computations and access the same memory locations. To exploit this characteristic, many cache schemes have been proposed to handle bursty access patterns. However, in routers based on multi-thread network processors, existing cache schemes do not suit bursty traffic. Since several threads may be processing packets with the same headers, if an earlier thread has not yet updated the cache entry, subsequent threads still repeat the computation because of the cache miss. In this paper, we propose a cache scheme called B-cache for multi-thread network processors. B-cache blocks subsequent threads from performing the same computations that are already being processed by an earlier thread. With B-cache, packet processing tasks with high locality, such as IP address lookup, packet classification, and intrusion detection, avoid duplicate computations and hence achieve a better packet processing rate. We implement the proposed B-cache scheme on the Intel IXP2400 network processor; the experimental results show that our B-cache scheme achieves the line speed of the Intel IXP2400.
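The blocking idea can be illustrated in general-purpose software: the first thread to miss on a key computes the result while later threads for the same key block until the entry is filled, instead of recomputing. A minimal sketch under that assumption (the class and names are illustrative, not the paper's IXP2400 microcode):

```python
import threading

class BlockingCache:
    """Compute-once cache: concurrent requests for the same key
    block until the first requester has filled the entry."""

    def __init__(self):
        self._lock = threading.Lock()
        self._done = {}     # key -> Event set once the value is ready
        self._values = {}   # key -> computed result

    def get(self, key, compute):
        with self._lock:
            ev = self._done.get(key)
            if ev is None:
                # First requester: create the event and take ownership.
                ev = threading.Event()
                self._done[key] = ev
                owner = True
            else:
                owner = False
        if owner:
            self._values[key] = compute(key)  # e.g. a route lookup
            ev.set()
        else:
            ev.wait()  # blocked until the owner finishes
        return self._values[key]

calls = []
def lookup(header):
    calls.append(header)          # count real computations
    return f"route-for-{header}"

cache = BlockingCache()
threads = [threading.Thread(target=cache.get, args=("10.0.0.1", lookup))
           for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
# lookup() runs once despite 8 concurrent requests for the same header
```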
Citations: 4
Scalability of ROADMs for multiple parallel fibers
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307431
C. Meusburger, D. Schupke
Growing traffic loads in communication networks demand cost-efficient multi-degree switching architectures. Reconfigurable optical add/drop multiplexers (ROADMs) offer cost-efficient and highly flexible solutions in terms of wavelength and direction reconfigurability. First, we give an overview of existing ROADM architectures by introducing a classification matrix. In the second part of the paper, we propose and analyze a concept that can overcome scalability limitations and decrease the cost of ROADMs connected by parallel fiber pairs. The concept is based on eliminating switching components while preserving the most important flexibility advantages of a ROADM.
Citations: 0
Secure and scalable optical access network using PLZT high-speed optical switches
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307433
Kazumasa Tokuhashi, Kunitaka Ashizawa, D. Ishii, Y. Arakawa, N. Yamanaka, K. Wakayama
We propose a new optical access network architecture using PLZT 10 ns high-speed optical switches, called the Active Optical Network (ActiON). Our architecture can support four times as many subscribers as GE-PON and double the distance between the OLT and the ONUs. Moreover, a user can establish secure communication in ActiON, because ActiON uses a slot-based switching system rather than a broadcast system. ActiON must, however, overcome an issue in the Discovery process, because an optical switch cannot broadcast. In this paper, we propose a new Discovery process technique. Through simulation and experimental results, we show that our proposed architecture completes the Discovery process accurately.
Citations: 23
Multicast scheduling in feedback-based two-stage switch
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307438
Bing Hu, K. Yeung
Scalability is of paramount importance in high-speed switch design. Two limiting factors are the complexity of the switch fabric and the need for a sophisticated central scheduler. In this paper, we focus on designing a scalable multicast switch. Given that the majority of Internet traffic is unicast, a cost-effective solution is to adopt a unicast switch fabric for handling both unicast and multicast traffic. Unlike existing approaches, we base our multicast switch design on the load-balanced two-stage switch architecture, because it does not require a central scheduler and its unicast switch fabric only needs to realize N switch configurations. Specifically, we adopt the feedback-based two-stage switch architecture [10], because it elegantly solves the notorious packet mis-sequencing problem while delivering excellent throughput-delay performance. By slightly modifying the operation of the original feedback-based two-stage switch, we propose a simple distributed multicast scheduling algorithm. Simulation results show that, with packet duplication at both the input ports and the middle-stage ports, the proposed multicast scheduling algorithm significantly reduces the average packet delay and the delay variation among different copies of the same multicast packet.
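For reference, a load-balanced two-stage fabric needs no central scheduler because both stages follow a fixed, time-cyclic connection pattern. A sketch of one common formulation of that pattern (the general architecture only, not this paper's feedback-based multicast variant):

```python
def two_stage_schedule(time_slot, n_ports, input_port):
    """Fixed connection pattern of a load-balanced two-stage switch.

    In slot t, input i connects to middle-stage port (i + t) % N
    regardless of traffic (stage 1 spreads load evenly), and middle
    port m connects to output (m + t) % N (stage 2 delivers).
    Returns (middle_port, output_port) for this input and slot.
    """
    middle = (input_port + time_slot) % n_ports
    output = (middle + time_slot) % n_ports
    return middle, output

# In every slot each stage realizes a permutation: no two inputs
# contend for the same middle port, so no scheduler is needed.
slot5_middles = {two_stage_schedule(5, 4, i)[0] for i in range(4)}
# slot5_middles == {0, 1, 2, 3}
```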
Citations: 3
Adaptive routing for Convergence Enhanced Ethernet
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307439
C. Minkenberg, A. Scicchitano, M. Gusat
A significant drive to consolidate data center networks on a single infrastructure is taking place. 10-Gigabit Ethernet is one of the contenders to fulfill the role of universal data center interconnect. One of the key features missing from conventional Ethernet is congestion management; this void is being filled by the standardization work of the IEEE 802.1Qau working group. However, the schemes under consideration react to congestion only at the sources by reducing the transmission rates of “hot” flows, i.e., those detected as contributing to congestion. This approach ignores a crucial aspect of many data center networks, namely, that there typically are multiple paths between any pair of end nodes. Before reducing transmission rates, it would make sense to look for an alternative, uncongested path first. Here, we propose an adaptive routing scheme that builds—in a fully transparent way—on top of the existing 802.1Qau schemes, by snooping the congestion notification frames to modify the routing behavior of the switching nodes. We demonstrate how this can lead to significant performance improvements by taking full advantage of path diversity.
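The core idea, rerouting a flow to an uncongested alternative upon a congestion notification rather than immediately throttling it, can be sketched as follows (the path table, flow identifiers, and selection policy here are hypothetical simplifications, not the 802.1Qau frame format or the paper's switch logic):

```python
class AdaptiveRouter:
    """Per-flow path selection: on a congestion notification (CN)
    for a flow, move it to a path not currently flagged congested
    instead of reducing its rate right away."""

    def __init__(self, paths):
        self.paths = list(paths)   # candidate paths between end nodes
        self.flow_path = {}        # flow id -> current path index
        self.congested = set()     # path indices flagged by snooped CNs

    def route(self, flow):
        # New flows hash onto a path; sticky thereafter to keep
        # packets of one flow in order.
        idx = self.flow_path.setdefault(flow, hash(flow) % len(self.paths))
        return self.paths[idx]

    def on_congestion_notification(self, flow):
        cur = self.flow_path.get(flow)
        self.congested.add(cur)
        # Prefer any path not currently marked congested.
        for idx in range(len(self.paths)):
            if idx not in self.congested:
                self.flow_path[flow] = idx
                return self.paths[idx]
        # All paths congested: stay put and let rate limiting act.
        return self.paths[cur]

router = AdaptiveRouter(["via-switch-1", "via-switch-2"])
p0 = router.route("flowA")
p1 = router.on_congestion_notification("flowA")
# flowA now avoids its original, congested path
```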
Citations: 4
An implementation and experimental study of the Adaptive PI Rate Control Protocol
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307424
Zhanliang Liu, Huan Wang, Wu Hu, Yang Hong, O. Yang, Y. Shu
The Adaptive PI Rate Control Protocol (API-RCP) has been proposed as an explicit congestion control mechanism [1]. In this paper, a Linux-based testbed is created to validate the features and performance of API-RCP, and various issues in implementing API-RCP in the Linux kernel are addressed. Our experimental results a) confirm that, in practice, API-RCP is as effective as the theory asserts in achieving a high and smooth sending rate, a small and smooth round-trip time (RTT), almost full link utilization, and zero packet drops, and b) demonstrate that API-RCP is robust to varying network environments with changing flows or network delays. All of this supports API-RCP as a very viable candidate for real network implementations.
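At the heart of any PI rate controller is a discrete proportional-integral update that drives queue occupancy toward a target by adjusting the rate advertised to sources. A generic sketch of that update (the gains, target, and clamping below are illustrative values, not the paper's adaptive tuning):

```python
class PIRateController:
    """Discrete PI controller: steers queue length toward a target
    by adjusting the rate advertised to traffic sources."""

    def __init__(self, kp, ki, q_target, rate, rate_max):
        self.kp, self.ki = kp, ki        # proportional / integral gains
        self.q_target = q_target        # desired queue length (packets)
        self.rate = rate                # current advertised rate
        self.rate_max = rate_max        # link capacity cap
        self.integral = 0.0             # accumulated error

    def update(self, q_len):
        error = self.q_target - q_len   # positive => queue too short
        self.integral += error
        self.rate += self.kp * error + self.ki * self.integral
        # Clamp to the feasible range [0, link capacity].
        self.rate = min(max(self.rate, 0.0), self.rate_max)
        return self.rate

ctrl = PIRateController(kp=0.05, ki=0.01, q_target=100,
                        rate=500.0, rate_max=1000.0)
r1 = ctrl.update(q_len=300)  # queue above target -> rate decreases
r2 = ctrl.update(q_len=100)  # at target -> integral still pushes down
```

The integral term is what lets the controller hold the rate at the operating point with zero steady-state queue error, which is why PI (rather than purely proportional) control yields the smooth rate and RTT the paper reports.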
Citations: 7
Frugal IP lookup based on a parallel search
Pub Date : 2009-06-22 DOI: 10.1109/HPSR.2009.5307435
Z. Čiča, A. Smiljanic
The lookup function in IP routers has always been a topic of great interest, since it represents a potential bottleneck to improving an Internet router's capacity. IP lookup is the search for the longest prefix in the lookup table that matches a given destination IP address. The lookup process must be fast in order to support increasing port bit-rates and numbers of IP addresses. Lookup table updates must also be performed quickly, because they happen frequently. In this paper, we propose a new algorithm, based on a parallel search implemented on an FPGA chip, that finds the next-hop information in external memory. The lookup algorithm must support both the existing IPv4 protocol and the future IPv6 protocol. We analyze the performance of the designed algorithm and compare it with existing lookup algorithms. Our proposed algorithm allows a fast search because it is parallelized within the FPGA chip. It also utilizes memory more efficiently than other algorithms because it does not allocate resources for empty subtrees. The update process of the proposed algorithm is as fast as the search process. The proposed algorithm is implemented and analyzed for both IPv4 and IPv6, and we show that it supports IPv6 effectively.
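Longest-prefix matching itself is easy to state in software. A minimal binary-trie sketch (a textbook baseline for comparison, not the paper's parallel FPGA design) that returns the next hop of the longest matching prefix:

```python
class LPMTrie:
    """Binary trie for longest-prefix-match IPv4 lookup."""

    def __init__(self):
        self.root = {}

    def insert(self, prefix, length, next_hop):
        """prefix: 32-bit int; length: prefix length in bits."""
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            node = node.setdefault(bit, {})
        node["nh"] = next_hop  # marks the end of a stored prefix

    def lookup(self, addr):
        """Walk the address bits, remembering the deepest next hop seen."""
        node, best = self.root, None
        for i in range(32):
            if "nh" in node:
                best = node["nh"]     # longest match so far
            node = node.get((addr >> (31 - i)) & 1)
            if node is None:
                break
        else:
            if "nh" in node:          # full /32 match
                best = node["nh"]
        return best

def ip(s):
    """Dotted-quad string -> 32-bit int."""
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

t = LPMTrie()
t.insert(ip("10.0.0.0"), 8, "hop-A")
t.insert(ip("10.1.0.0"), 16, "hop-B")
# 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16; the /16 wins
```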
Citations: 6