Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734442
Yanming Shen, S. Panwar, H. J. Chao
Crosspoint buffered switches are emerging as the focus of research in high-speed routers. They have simpler scheduling algorithms and achieve better performance than a bufferless crossbar switch. Crosspoint buffered switches have a buffer at each crosspoint: a cell is first delivered to a crosspoint buffer and then transferred to the output port. With a speedup of two, a crosspoint buffered switch has previously been proved to provide 100% throughput. In this paper, we propose a 100% throughput scheduling algorithm without speedup, called SQUID. With this design, each input/output keeps track of the previously served virtual output queues (VOQs)/crosspoint buffers. We prove that SQUID, with a time complexity of O(log N), achieves 100% throughput without any speedup. Our simulation results also show delay performance comparable to output-queued switches. We also present a novel queuing model for crosspoint buffered switches under uniform traffic.
{"title":"A low complexity scheduling algorithm for a crosspoint buffered switch with 100% throughput","authors":"Yanming Shen, S. Panwar, H. J. Chao","doi":"10.1109/HSPR.2008.4734442","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734442","url":null,"abstract":"Crosspoint buffered switches are emerging as the focus of research in high-speed routers. They have simpler scheduling algorithms, and achieve better performance than a bufferless crossbar switch. Crosspoint buffered switches have a buffer at each crosspoint. A cell is first delivered to a crosspoint buffer, and then transferred to the output port. With a speedup of two, a crosspoint buffered switch has previously been proved to provide 100% throughput. In this paper, we propose a 100% throughput scheduling algorithm without speedup, called SQUID. With this design, each input/output keeps track of the previously served virtual output queues (VOQs)/crosspoint buffers. We prove that SQUID, with a time complexity of O(log N), can achieve 100% throughput without any speedup. Our simulation results also show a delay performance comparable to outputqueued switches. We also present a novel queuing model that models crosspoint buffered switches under uniform traffic.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124341022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734456
Sugang Xu, H. Harai
In this paper we consider future grid systems built in a large-scale open lightpath optical network environment. According to users' needs for high-performance distributed computation, a group of dedicated lightpaths among the distributed computing resources, forming the users' on-demand optical grid networks, is established dynamically and efficiently, providing high-quality communication capability to users. In such a large-scale, dynamic and concurrent open network environment, in order to create the best network environment for users' applications, we propose an incremental improvement approach to the joint optimization problem of process/processor mapping and logical topology design. Evaluations show that this approach is effective and flexible in finding alternative solutions when lightpath provisioning blocking occurs during the construction of a user's lightpath network. In addition, we present two signaling system designs for automatic construction of the users-oriented on-demand optical grid network.
{"title":"An approach to users-oriented on demand optical grid network construction in large scale open pptical networks","authors":"Sugang Xu, H. Harai","doi":"10.1109/HSPR.2008.4734456","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734456","url":null,"abstract":"In this paper we consider the future grid systems built in a large scale open lightpath optical network environment. According to userspsila needs on high performance distributed computation, a group of dedicated lightpaths among the distributed computer resources forming the userspsila on-demand optical grid networks are established dynamically and efficiently providing the high-quality communication capability for users. In such a large scale dynamic and concurrent open network environment, in order to create the best network environment for userspsila applications, we propose an incremental improvement approach to the joint optimization problem of process/processor mapping and logical topology design. Through evaluations, it is shown that this approach is effective and flexible to find the alternative solutions when lightpath provisioning blocking occurs during the userspsila lightpath network construction. In addition, we present two signaling system designs for automatic construction of the users-oriented on-demand optical grid network.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132435695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734421
A. Khan, R. Birke, D. Manjunath, A. Sahoo, A. Bianco
Recent research in the different functional areas of modern routers has produced proposals that can greatly increase the efficiency of these machines. Most of these proposals can be implemented quickly and often efficiently in software. We wish to use personal computers as forwarders in a network to take advantage of these advances. We therefore examine the ability of a personal computer to act as a router. We analyze the performance of a single general-purpose computer and show that I/O is the primary bottleneck. We then study the performance of a distributed router composed of multiple general-purpose computers. We study a star topology and show through experimental results that, although its performance is good, it lacks flexibility in its design; we compare it with a multistage architecture. We conclude with a proposal for an architecture that provides a forwarder that is both flexible and scalable.
{"title":"Distributed PC based routers: Bottleneck analysis and architecture proposal","authors":"A. Khan, R. Birke, D. Manjunath, A. Sahoo, A. Bianco","doi":"10.1109/HSPR.2008.4734421","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734421","url":null,"abstract":"Recent research in the different functional areas of modern routers have made proposals that can greatly increase the efficiency of these machines. Most of these proposals can be implemented quickly and often efficiently in software. We wish to use personal computers as forwarders in a network to utilize the advances made by researchers. We therefore examine the ability of a personal computer to act as a router. We analyze the performance of a single general purpose computer and show that I/O is the primary bottleneck. We then study the performance of distributed router composed of multiple general purpose computers. We study the performance of a star topology and through experimental results we show that although its performance is good, it lacks flexibility in its design. We compare it with a multistage architecture. We conclude with a proposal for an architecture that provides us with a forwarder that is both flexible and scalable.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114522834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734417
E.M. Al Sukhni, H. Mouftah
This paper presents a novel distributed protocol for online provisioning in survivable mesh-based wavelength-division multiplexed (WDM) networks. The protocol examines each of the k link-disjoint shortest paths as a candidate working path and as a candidate shared-protection path at the same time, in parallel, which gives the destination node the ability to apply intelligent adaptive routing and wavelength assignment. Moreover, it is the first distributed protocol that can provision the working and backup paths in parallel. We discuss in detail the control and management techniques used to set up and tear down connections and to determine restoration capacity shareability in a distributed manner. Since only local information is maintained at each node, protocol scalability is guaranteed. The significant contributions of this protocol in terms of connection request blocking probability and connection setup time are discussed. We show the effectiveness of the proposed protocol through setup time analysis and simulation experiments.
{"title":"Parallel distributed lightpath control and management for survivable optical mesh networks","authors":"E.M. Al Sukhni, H. Mouftah","doi":"10.1109/HSPR.2008.4734417","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734417","url":null,"abstract":"This paper presents a novel distributed protocol for online provisioning in survivable mesh-based wavelength-division multiplexed (WDM) networks. This protocol examines each of the k link disjoint shortest paths as a candidate working path and as a candidate shared protection path at the same time, in parallel which gives the destination node the ability to apply an intelligent adaptive routing and wavelength assignment. Moreover, this protocol is the first distributed protocol can provision the working and the backup paths in parallel. We discuss in details a control and management techniques to set up and tear down connections and determine restoration capacity shareability in a distributed manner. Since only local information is maintained at each node, protocol scalability is guaranteed. The significant contribution of this protocol in terms of connection request blocking probability and connection setup time are discussed. We show through setup time analysis and simulation experiments the effectiveness of the proposed protocol.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123519118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734454
M. Chaitou, J.-L. Le Roux
This paper proposes fast reroute extensions to RSVP-TE in order to support the protection of multipoint-to-multipoint (MP2MP) MPLS TE-tunnels. To protect an element (link or node) of a primary MP2MP TE-tunnel, we propose to use an MP2MP bypass TE-tunnel connecting a set of nodes around the protected element. During a failure, the primary MP2MP TE-tunnel is encapsulated into the MP2MP bypass TE-tunnel, which calls for defining a new type of MPLS hierarchy: the multipoint-to-multipoint hierarchy. The node of the primary MP2MP TE-tunnel upstream of the protected element, called the upstream protecting node (UPN), selects the MP2MP bypass TE-tunnel to be used for the protection. By extending the point-to-multipoint MPLS hierarchy, which relies on upstream label assignment, we discuss several extension scenarios depending on the number of leaves of a bypass TE-tunnel and on the number of UPNs per bypass tunnel. The scalability/bandwidth-consumption tradeoff between these schemes is analyzed by means of simulations. The proposed method can also be used efficiently for the protection of point-to-point and point-to-multipoint TE-tunnels, as such tunnels are actually particular cases of MP2MP TE-tunnels.
{"title":"Fast-reroute extensions for multi-point to multi-point MPLS tunnels","authors":"M. Chaitou, J.-L. Le Roux","doi":"10.1109/HSPR.2008.4734454","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734454","url":null,"abstract":"This paper proposes fast reroute extensions to RSVP-TE in order to support the protection of multipoint to multipoint (MP2MP) MPLS TE-tunnels. To protect an element (link or node) of a primary MP2MP TE-tunnel, we propose to use an MP2MP bypass TE-tunnel connecting a set of nodes around the protected element. During failure the primary MP2MP TE-tunnel is encapsulated into the MP2MP bypass TE-tunnel which calls for defining a new type of MPLS hierarchy, the multipoint to multipoint hierarchy. The node of the primary MP2MP TE-tunnel upstream to the protected element, called the upstream protecting node (UPN), selects the MP2MP bypass TE-tunnel to be used for the protection. By extending the point to multipoint MPLS hierarchy, which relies on the upstream label assignment, we discuss several extensions scenarios depending on the number of leaves of a bypass TE-tunnel and on the number of UPNs per bypass tunnel. The scalability/bandwidth-consumption tradeoff between these schemes is analyzed by means of simulations. The proposed method can be efficiently used for the protection of point-to-point and point-to-multipoint TE-tunnels as well, as such tunnels are actually particular cases of MP2MP TE-tunnels.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734458
M. Savasini, P. Monti, M. Tacca, A. Fumagalli, H. Waldman
Optical signal regenerators (3R) are required to overcome the adverse effects of fiber and other transmission impairments. 3R units may be placed either at every node (full placement) or at selected nodes (sparse placement) of the optical network. It has been argued [1] that while the latter placement strategy may not be optimal in terms of the total number of 3R units required to support a given set of static traffic demands, it offers a number of practical advantages over the former, e.g., a contained complexity of network management in terms of signaling overhead. In this paper the full and sparse placement strategies are compared in a dynamic optical network, whereby lightpaths are set up and torn down to best fit the changing offered demands. The study shows that the blocking probability due to the lack of available 3R units achieved by the sparse placement strategy may be comparable to that achieved by the full placement strategy. Surprisingly, it may even be lower in some cases, thus providing an additional motivation in favor of the sparse placement strategy. The study also shows that the algorithm used to choose the nodes at which to place the 3R units must be designed carefully. Two placement algorithms are compared, reporting differences in signaling overhead as high as a factor of six (when achieving a desired level of lightpath connectivity) and differences in blocking probability as high as two orders of magnitude (when using the same level of signaling overhead).
{"title":"Trading network management complexity for blocking probability when placing optical regenerators","authors":"M. Savasini, P. Monti, M. Tacca, A. Fumagalli, H. Waldman","doi":"10.1109/HSPR.2008.4734458","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734458","url":null,"abstract":"Optical signal regenerators (3R) are required to overcome the adverse effect of fiber and other transmission impairments. 3R units may be placed either at every node (full placement) or at some selected nodes (sparse placement) of the optical network. It has been argued [1] that while the latter placement strategy may not be optimal in terms of the total number of 3R units required to support a given set of static traffic demands, it offers a number of practical advantages over the former, e.g., a contained complexity of network management in terms of signaling overhead. In this paper the full and sparse placement strategies are compared in a dynamic optical network, whereby lightpaths are set up and torn down to best fit the offered changing demands. The study shows that the blocking probability due to the lack of available 3R units achieved by the sparse placement strategy may be comparable to the one achieved by the full placement strategy. Surprisingly, it may even be lower in some cases, thus providing an additional motivation in favor of the sparse placement strategy. The study also shows that the algorithm used to choose the nodes where to place the 3R units must be designed carefully. Two placement algorithms are compared, reporting differences in signaling overhead level as high as 6 times (when achieving a desired level of lightpath connectivity) and differences in blocking probabilities as high as two orders of magnitude (when using the same level of signaling overhead).","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123150142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734437
Andrea Bianco, Luca Giraudo, A. Scicchitano
Multicast traffic in storage area networks (SANs) enables applications such as disaster recovery, remote data replication and distributed multimedia systems, in which a server concurrently accesses multiple storage devices or, conversely, multiple servers access data on a single device. An asynchronous lossless switching architecture devised for SANs is therefore described, and its performance under multicast traffic is studied. Simulations are used to analyze switch performance under various traffic patterns and schedulers. Although most of the simulations refer to a specific switch architecture, the performance results highlight interesting general trends in flow-controlled asynchronous architectures. These architectures could also be used effectively in a more traditional data switching and routing scenario. In this case, multicast support becomes essential to support multimedia QoS-aware applications and protocols that rely heavily on the broadcast property of LANs.
{"title":"Asynchronous SAN switching under multicast traffic","authors":"Andrea Bianco, Luca Giraudo, A. Scicchitano","doi":"10.1109/HSPR.2008.4734437","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734437","url":null,"abstract":"Multicast traffic in storage area networks (SANs) enables applications such as disaster recovery, remote data replication and distributed multimedia systems, in which a server access concurrently multiple storage devices or, conversely, multiple servers access data on a single device. Thus, an asynchronous loss-less switching architecture devised for SANs is described, and its performance under multicast traffic is studied. Simulations are used to analyze switch performance under various traffic patterns and schedulers. Although most of the simulations refer to a specific switch architecture, performance results highlight interesting general trends in flow controlled asynchronous architectures. These architectures could be used effectively also in a more traditional data switching and routing scenario. In this case, multicast support becomes essential to support multimedia QoS aware applications and protocols heavily relying on the broadcast property of LANs.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117172615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734446
Olivier Morandi, Fulvio Risso, Pierluigi Rolando, O. Hagsand, Peter Ekdahl
Systolic array network processors represent an effective alternative to ASICs for the design of multi-gigabit packet switching and forwarding devices because of their flexibility, high aggregate throughput and deterministic worst-case performance. However, such advantages come at the expense of some limitations, stemming both from the specific characteristics of the pipelined architecture and from the lack of support for portable high-level languages in the software development tools, forcing software engineers to deal with low-level aspects of the underlying hardware platform. In this paper we present a set of techniques that have been implemented in the Network Virtual Machine (NetVM) compiler infrastructure for mapping general layer 2-3 packet processing applications onto the Xelerated X11 systolic-array network processor. In particular, we demonstrate that our compiler is able to effectively exploit the available hardware resources and to generate code that is comparable to hand-written code, hence ensuring excellent throughput performance.
{"title":"Mapping packet processing applications on a systolic array network processor","authors":"Olivier Morandi, Fulvio Risso, Pierluigi Rolando, O. Hagsand, Peter Ekdahl","doi":"10.1109/HSPR.2008.4734446","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734446","url":null,"abstract":"Systolic array network processors represent an effective alternative to ASICs for the design of multi-gigabit packet switching and forwarding devices because of their flexibility, high aggregate throughput and deterministic worst-case performances. However such advantages come at the expense of some limitations, given both by the specific characteristics of the pipelined architecture and by the lack of support for portable high-level languages in the software development tools, forcing software engineers to deal with low level aspects of the underlying hardware platform. In this paper we present a set of techniques that have been implemented in the Network Virtual Machine (NetVM) compiler infrastructure for mapping general layer 2-3 packet processing applications on the Xelerated X11 systolic-array network processor. In particular we demonstrate that our compiler is able to effectively exploit the available hardware resources and to generate code that is comparable to hand-written one, hence ensuring excellent throughput performances.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"17 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115245137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734415
Xiaojun Hei, Shan Chen, B. Bensaou, Danny H. K. Tsang
Self-congestion probing, which estimates bandwidth by inducing temporary congestion with a probing stream, is the most popular approach to bandwidth measurement. Self-congestion tools are easy to implement, fast to converge, and robust to network dynamics with reasonably good accuracy; however, current tools exploit only part of the congestion signals, even though the probing stream experiences a rich spectrum of them. TCP protocols follow the same self-congestion principle in inferring available bandwidth. We propose a unified self-congestion probing framework by bridging self-congestion probing for available bandwidth measurement and TCP congestion control. Recent progress in TCP congestion control, in both theory and experimentation, provides new avenues for improving current self-congestion tools. Based on this unified framework, we design and evaluate a simple available bandwidth probing scheme that utilizes the explicit congestion notification (ECN) signal, named ECNProbe. We conduct a measurement study on a Linux-based testbed and evaluate the performance of several available bandwidth measurement tools. We demonstrate that the proposed ECNProbe significantly improves measurement accuracy with small convergence time and low probing overhead.
{"title":"Towards unified self-congestion probing for bandwidth measurement","authors":"Xiaojun Hei, Shan Chen, B. Bensaou, Danny H. K. Tsang","doi":"10.1109/HSPR.2008.4734415","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734415","url":null,"abstract":"The self-congestion probing, which estimates bandwidth by controlling a temporal congestion of a probing stream, is the most popular approach in bandwidth measurement. Self-congestion tools are easy to implement, fast to converge and are robust to network dynamics with reasonably good accuracy; however, the current tools only exploit parts of the congestion signals, though the probing stream experiences a rich spectrum of congestion signals. TCP protocols follow the same self-congestion principle on inferring available bandwidth. We propose a unified self-congestion probing framework by bridging the self-congestion probing for available bandwidth measurement and TCP congestion control. The recent progress of TCP congestion control in both theory and experimentation provides new avenues in improving the current self-congestion tools. Based on this unified framework, we design and evaluate a simple available bandwidth probing scheme to utilize the explicit congestion notification signal, namely, ECNProbe. We conduct a measurement study on a Linux-based testbed and evaluate the performance of several available bandwidth measurement tools. We demonstrate that the proposed ECNProbe significantly improves the measurement accuracy with small convergence time and low overhead probing.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128367605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2008-05-15 | DOI: 10.1109/HSPR.2008.4734424
K. Vlachos, W. Kabaciński, S. Wêclewski
We present two architectures for implementing optical buffers. Both use multi-wavelength selective elements, such as quantum dot semiconductor optical amplifiers (QD-SOAs), as multi-wavelength converters, combined with fixed-length delay lines to form an output queuing switch design and a parallel buffer switch design. The output queuing buffer design requires fewer active devices (QD-SOAs) when implementing large buffers, but the parallel buffer design becomes more advantageous when the number of wavelength channels that can be simultaneously processed by the wavelength selective switches (QD-SOAs) increases, because the number of active devices then depends only on the buffer size. We also propose a scheduling algorithm to resolve packet contention in the parallel buffer architecture and carry out simulations considering mean packet delay, maximum buffer occupancy and packet loss probability.
{"title":"New architectures for optical packet switching using QD-SOAs for multi-wavelength buffering","authors":"K. Vlachos, W. Kabaciński, S. Wêclewski","doi":"10.1109/HSPR.2008.4734424","DOIUrl":"https://doi.org/10.1109/HSPR.2008.4734424","url":null,"abstract":"We present two architectures for implementing optical buffers. Both use multi-wavelength selective elements like quantum dot semiconductor optical amplifiers (QD-SOAs) as multi-wavelength converters and fixed-length delay lines that are combined to form both an output queuing and a parallel buffer switch design. The output queuing buffer design requires less active devices (QD-SOA) when implementing large buffers, but the parallel buffer design becomes more profitable, when the number of wavelength channels that can be simultaneously processed by the wavelength selective switches (QD-SOAs) increases. This is because the number of active devices depends only on the buffer size. We also proposed scheduling algorithm to resolve packet contention in parallel buffer architecture and carried out a simulation considering mean packet delay, maximum buffer occupancy and packet loss probability.","PeriodicalId":130484,"journal":{"name":"2008 International Conference on High Performance Switching and Routing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115717362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}