{"title":"Microserver + micro-switch = micro-datacenter","authors":"F. Abel, A. Doering","doi":"10.1145/3073763.3073772","DOIUrl":null,"url":null,"abstract":"Many computational workloads from commercial and scientific fields have high demands in total throughput, and energy efficiency. For example the largest radio telescope, to be built in South Africa and Australia combines cost, performance and power targets that cannot be met by the technological development until its installation. In processor architecture a design tradeoff between cost and power efficiency against single-thread performance is observed. Hence, to achieve a high system power efficiency, large-scale parallelism has to be employed. In order to maintain wire length, and hence network delays, energy losses, and cost, the volume of compute nodes and network switches has to be reduced to a minimum âĂŞ hence the term microserver. Our DOME microserver compute card measures 130 by 7.5 by 65 mm3. The presented switch module is confined to the same area (140 by 55 mm2), yet is deeper (40mm) because of the 630-pin high-speed connector. For 64 ports of 10Gbit Ethernet (10Gbase-KR) our switch consumes about 150W maximal. In addition to the switch ASIC (Intel FM6000 series), the power converters, clock generation, configuration memory and management processor is integrated on a second PCB. The switch management (âĂIJControl PointâĂİ) is implemented in a separate compute node. In the talk options to integrate the management into the switch (same volume as now) will be discussed. Another topic covered is the cooling of the microserver, and of the switch in particular, using (warm) water in the infrastructure and heat pipes on the module.","PeriodicalId":20560,"journal":{"name":"Proceedings of the 2nd International Workshop on Advanced Interconnect Solutions and Technologies for Emerging Computing Systems","volume":"94 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2017-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2nd International Workshop on Advanced Interconnect Solutions and Technologies for Emerging Computing Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3073763.3073772","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Many computational workloads from commercial and scientific fields place high demands on total throughput and energy efficiency. For example, the world's largest radio telescope, to be built in South Africa and Australia, combines cost, performance, and power targets that cannot be met by the technological progress expected before its installation. In processor architecture, a design tradeoff is observed between cost and power efficiency on the one hand and single-thread performance on the other. Hence, to achieve high system power efficiency, large-scale parallelism has to be employed. To keep wire lengths short, and with them network delays, energy losses, and cost, the volume of compute nodes and network switches has to be reduced to a minimum; hence the term microserver. Our DOME microserver compute card measures 130 by 7.5 by 65 mm. The presented switch module is confined to the same area (140 by 55 mm), yet is deeper (40 mm) because of its 630-pin high-speed connector. For 64 ports of 10 Gbit Ethernet (10Gbase-KR), our switch consumes at most about 150 W. In addition to the switch ASIC (Intel FM6000 series), the power converters, clock generation, configuration memory, and management processor are integrated on a second PCB. The switch management ("Control Point") is implemented on a separate compute node. In the talk, options for integrating the management into the switch (within the same volume as now) will be discussed. Another topic covered is the cooling of the microserver, and of the switch in particular, using (warm) water in the infrastructure and heat pipes on the module.
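The quoted figures allow a quick plausibility check of the switch's efficiency. The following minimal Python sketch uses only the numbers stated in the abstract (64 ports, 10 Gbit/s per port, 150 W maximum, 140 by 55 by 40 mm module); the derived per-port power, bandwidth-per-watt, and volume figures are our own arithmetic, not values reported by the authors.

```python
# Back-of-the-envelope figures for the presented micro-switch,
# computed from the numbers quoted in the abstract only.

ports = 64             # 10Gbase-KR ports
port_rate_gbps = 10    # line rate per port, Gbit/s
max_power_w = 150      # quoted maximum power draw, W

aggregate_gbps = ports * port_rate_gbps           # 640 Gbit/s total
power_per_port_w = max_power_w / ports            # ~2.34 W per port
gbps_per_watt = aggregate_gbps / max_power_w      # ~4.27 Gbit/s per W

# Switch module envelope (area shared with the DOME compute card, 40 mm deep)
width_mm, height_mm, depth_mm = 140, 55, 40
volume_l = width_mm * height_mm * depth_mm / 1e6  # ~0.31 litres

print(f"aggregate bandwidth : {aggregate_gbps} Gbit/s")
print(f"power per port      : {power_per_port_w:.2f} W")
print(f"bandwidth per watt  : {gbps_per_watt:.2f} Gbit/s/W")
print(f"module volume       : {volume_l:.2f} l")
```

Roughly 2.3 W per 10 Gbit/s port in about a third of a litre is the density argument behind packing compute cards and the switch into the same micro-form-factor enclosure.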