{"title":"Data center transport in the zettabyte IP network","authors":"L. Paraschis, Sudhir Modali","doi":"10.1109/PHOSST.2010.5553697","DOIUrl":null,"url":null,"abstract":"Data centers have been evolving to meet the requirements for scale, and flexible service delivery with the most efficient resource utilization (CapEx), and operational simplicity (OpEx), including notably power management (which is important for both CapEx and OpEx). In many respects, the data center architecture has been closely following the computing paradigm, moving from a centralized design in the era of mainframe computing, to decentralized designs with the advent of client-server and distributed computing [1]. The scaling of these decentralized designs however has been increasingly challenging due to the interconnectivity and fiber-management needs (Figure 1), leading to complex configurations (top-of-rack, end-of-row, environmental, etc.) in order to meet environmental constraints. At the same time the cost for power and cooling has been dramatically increasing, currently often exceeds the actually server cost [1, 2]. Significant advancements in: 1) stateless computing, 2) consolidated switching fabric, combining both Ethernet and Storage transport, and 3) photonics for 10/40/100GE interconnectivity technologies, have recently enabled the evolution towards a new converged data center architecture. Figure 2 shows the main layer of this new architecture. An access switching layer interconnecting the different applications servers, is aggregated in a consolidated core switching layer that could also combine the important (typically also hierarchical) storage infrastructure (using the new FCoE standard). The application virtualization, and consolidated switch fabric advance significantly the operational efficiency of the ever increasing need for computationally intensive applications. 
At the same the advancements in the price performance of 10GE and emerging 40 and 100 GE optical interconnections have dramatically improved the capacity scalability, and infrastructure cost (CapEx). These innovations have also enabled significant power-efficiency improvements. For example the CXP optics modules would offer 100GE interconnectivity (up to 2km) with more than 10x improved power consumption when compared with the 1.2 Watts per Gb/s of the GBIC GE technology [3].","PeriodicalId":440419,"journal":{"name":"IEEE Photonics Society Summer Topicals 2010","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Photonics Society Summer Topicals 2010","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PHOSST.2010.5553697","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Data centers have been evolving to meet requirements for scale and flexible service delivery with the most efficient resource utilization (CapEx) and operational simplicity (OpEx), including notably power management (which affects both CapEx and OpEx). In many respects, data center architecture has closely followed the computing paradigm, moving from centralized designs in the era of mainframe computing to decentralized designs with the advent of client-server and distributed computing [1]. The scaling of these decentralized designs, however, has become increasingly challenging due to interconnectivity and fiber-management needs (Figure 1), leading to complex configurations (top-of-rack, end-of-row, etc.) in order to meet environmental constraints. At the same time, the cost of power and cooling has increased dramatically, and currently often exceeds the actual server cost [1, 2]. Significant advancements in 1) stateless computing, 2) consolidated switching fabrics combining both Ethernet and storage transport, and 3) photonics for 10/40/100GE interconnectivity have recently enabled the evolution towards a new converged data center architecture. Figure 2 shows the main layers of this new architecture: an access switching layer interconnecting the different application servers is aggregated into a consolidated core switching layer, which can also incorporate the important (typically hierarchical) storage infrastructure using the new FCoE standard. Application virtualization and the consolidated switch fabric significantly advance the operational efficiency of the ever-increasing base of computationally intensive applications. At the same time, advancements in the price-performance of 10GE and emerging 40 and 100 GE optical interconnections have dramatically improved capacity scalability and infrastructure cost (CapEx). These innovations have also enabled significant power-efficiency improvements.
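The fiber-management pressure behind Figure 1 can be made concrete with a back-of-envelope link count comparing the end-of-row and top-of-rack configurations the abstract names. The rack, server, and uplink counts below are hypothetical illustrations, not figures from the paper:

```python
# Hypothetical link-count sketch: why decentralized designs strain
# fiber management. All quantities are illustrative assumptions.
racks = 40
servers_per_rack = 32

# End-of-row (EoR): every server is cabled directly across the row
# to an aggregation switch, so each server contributes one long run.
eor_cables_to_row_end = racks * servers_per_rack        # 1280 long runs

# Top-of-rack (ToR): servers patch within their own rack; only the
# ToR switch uplinks leave the rack and need row-level fiber management.
uplinks_per_tor = 4
tor_cables_leaving_rack = racks * uplinks_per_tor       # 160 long runs

print(eor_cables_to_row_end, tor_cables_leaving_rack)   # 1280 160
```

Even under these modest assumptions, the row-level cabling differs by almost an order of magnitude, which is the scaling pressure that motivates the consolidated fabric described next.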
For example, CXP optics modules offer 100GE interconnectivity (up to 2 km) with a more than 10x improvement in power consumption per Gb/s compared with the 1.2 W per Gb/s of GBIC GE technology [3].
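The per-bit figures above imply a simple comparison. Taking the abstract's 1.2 W/Gb/s GBIC baseline and its ">10x" claim at face value (the exact CXP figure is an assumption for illustration, not a datasheet value):

```python
# Per-bit power comparison implied by the abstract's numbers.
gbic_w_per_gbps = 1.2          # GE GBIC baseline, from the abstract [3]
improvement = 10               # ">10x" claim, taken as exactly 10x here

cxp_w_per_gbps = gbic_w_per_gbps / improvement   # ~0.12 W per Gb/s
cxp_module_watts = cxp_w_per_gbps * 100          # ~12 W for a 100GE link

# A GE-era design delivering 100 Gb/s at 1.2 W/Gb/s would instead draw:
ge_equivalent_watts = gbic_w_per_gbps * 100      # 120 W

print(cxp_w_per_gbps, cxp_module_watts, ge_equivalent_watts)
```

So at equal aggregate capacity, the newer optics would consume roughly one-tenth the power of a GBIC-based design, which is the power-efficiency improvement the abstract highlights.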