{"title":"Seymour Cray award lecture","authors":"Tadashi Watanabe","doi":"10.1145/1188455.1188664","DOIUrl":"https://doi.org/10.1145/1188455.1188664","url":null,"abstract":"","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122342043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past decade, commodity computing and Linux have helped to significantly transform supercomputing. Fueled by the open source model, collaboration within the supercomputing community has had far-reaching effects on enterprise computing. In his talk, Matthew Szulik (Chairman and CEO of Red Hat) will draw parallels between open source trends in supercomputing and the advancement of enterprise computing. Looking ahead, he will discuss how meeting the future's computing challenges will require faster innovation driven by better collaboration.
{"title":"Open source software: a powerful model for inspiring imagination","authors":"Matthew J. Szulik","doi":"10.1145/1188455.1188661","DOIUrl":"https://doi.org/10.1145/1188455.1188661","url":null,"abstract":"Over the past decade, commodity computing and Linux have helped to significantly transform supercomputing. Fueled by the open source model, collaboration of the supercomputing community has had far reaching affects on enterprise computing. In his talk, Matthew Szulik (Chairman and CEO of Red Hat) will draw parallels between open source trends in supercomputing and the advancement of enterprise computing. As we look ahead, he will discuss how meeting the future's computing challenges will require faster innovation driven by better collaboration.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116390948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This BOF will continue the debate around productivity metrics for supercomputers. At several recent user forums, a consensus emerged that it is not possible to develop petascale applications without interactive access to thousands of processors. Yet most large systems are managed via a batch scheduler with long (and unpredictable) queue wait times. Most batch scheduler policies treat high system utilization as "good". But high utilization dilates average queue wait time and increases wait-time unpredictability, both of which are "bad" for application developers' productivity. What are the options for resolving these conflicting implications of running a supercomputer at high system utilization? Is it possible to manage a supercomputer to meet the high-throughput demands of stable applications and the on-demand access requirements of large-scale code developers concurrently? Or do these two usage scenarios inherently conflict? Participants will explain and debate several creative solutions that could enable both high throughput and high availability for program development.
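The claim that high utilization dilates queue wait time can be illustrated with elementary queueing theory. The sketch below is not from the abstract; it uses the standard M/M/1 model, where the mean time a job spends waiting in queue is W_q = ρ / (μ(1 − ρ)) for utilization ρ and service rate μ, which grows without bound as ρ approaches 1:

```python
def mm1_mean_wait(utilization: float, service_rate: float = 1.0) -> float:
    """Mean time a job waits in queue in an M/M/1 system.

    W_q = rho / (mu * (1 - rho)); diverges as utilization -> 1,
    which is the effect the BOF abstract describes.
    """
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return utilization / (service_rate * (1.0 - utilization))


if __name__ == "__main__":
    # Wait time explodes in the last few percent of utilization.
    for rho in (0.50, 0.90, 0.99):
        print(f"utilization {rho:.0%}: mean wait = "
              f"{mm1_mean_wait(rho):.1f} mean service times")
```

Real batch schedulers are far from M/M/1 (backfilling, priorities, heavy-tailed job sizes), but the qualitative trade-off between utilization and responsiveness is the same.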
{"title":"Is 99% utilization of a supercomputer a good thing?","authors":"A. Snavely, J. Kepner","doi":"10.1145/1188455.1188493","DOIUrl":"https://doi.org/10.1145/1188455.1188493","url":null,"abstract":"This BOF will continue debate revolving around productivity metrics for supercomputers. At several recent user forums, consensus emerged that it is not possible to develop petascale applications without interactive access to thousands of processors. But most large systems are managed via a batch scheduler with long (and unpredictable) queue wait times. Most batch scheduler policies assume high system utilization as \"good\". But high utilization dilates average queue wait time and increases wait-time unpredictability, both of which are \"bad\" for application developer's productivity. What are the options to address these conflicting implications for running a supercomputer at high system utilization? Is it possible to manage a supercomputer to meet the high-throughput demands of stable applications and the on-demand access requirements of large-scale code developers concurrently? Or do these two usage scenarios inherently conflict? Participants will explain and debate several creative solutions that could enable high throughput and high availability for program development.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125510853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The combustion process is at the root of most energy production systems. Understanding combustion is fundamental to exploiting the available natural resources efficiently and to reducing pollutant emissions. The giant leaps made in computer science over the past two decades make it possible to use computer simulation to better understand combustion in real industrial configurations. This presentation discusses and illustrates the application of high performance computing to Computational Fluid Dynamics (CFD). Specific attention is given to the Large Eddy Simulation (LES) approach for industrial energy production configurations, ranging from aeronautical gas turbine engines (including those of helicopters and commercial airliners) to piston engines and the stationary gas turbine engines used in large-scale electricity production.
{"title":"High performance computing for combustion applications","authors":"G. Staffelbach","doi":"10.1145/1188455.1188514","DOIUrl":"https://doi.org/10.1145/1188455.1188514","url":null,"abstract":"Combustion process is at the root of most energy production systems. The understanding of combustion is fundamental to exploit efficiently the available natural resources and to reduce pollutant emissions. The giant leaps performed in computer science over the past two decades render possible the use of computer simulation to better understand combustion in real industrial configurations. This presentation discusses and illustrates the application of high performance computing for Computational Fluid Dynamics (CFD). Specific attention is addressed to the Large Eddy Simulation (LES) approach for industrial energy production configurations: ranging from aeronautical gas turbine engines including helicopters and commercial airliners, piston engines and stationary gas turbine engines used in large scale electricity production systems.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126411637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Twenty-five years ago, supercomputing was dominated by vector processors and emergent SIMD array processors clocked at tens of megahertz. Today, responding to dramatic advances in semiconductor device fabrication technologies, the world of supercomputing is dominated by multi-core-based MPP and commodity cluster systems clocked at gigahertz. Twenty-five years in the future, the technology landscape will again have experienced dramatic change with the flat-lining of Moore's Law, the realization of nanoscale devices, and the emergence of potentially alien technologies, architectures, and paradigms. If Moore's Law were to continue to progress as before, we would be deploying systems approaching 100 exaflops with clock rates nearing a terahertz. But by then, power constraints, quantum effects, or our inability to exploit trillion-way program parallelism may have forced us into entirely new realms of processing. This presentation will consider the range of alternative technologies, architectures, and methods that may drive the extremes of computing beyond the incremental steps of the current era.
{"title":"Beyond the beyond and the extremes of computing","authors":"T. Sterling","doi":"10.1145/1188455.1188524","DOIUrl":"https://doi.org/10.1145/1188455.1188524","url":null,"abstract":"Twenty five years ago supercomputing was dominated by vector processors and emergent SIMD array processors clocked at tens of Megahertz. Today responding to dramatic advances in semiconductor device fabrication technologies, the world of supercomputing is dominated by multi-core based MPP and commodity cluster systems clocked at Gigahertz. Twenty five years in the future, the technology landscape will again have experienced dramatic change with the flat-lining of Moore's Law, the realization of nanoscale devices, and the emergence of potentially alien technologies, architectures, and paradigms. If Moore's Law were to continue to progress as before, we would be deploying systems approaching 100 Exaflops with clock rates nearing a Terahertz. But by then, power constraints, quantum effects, or our inability to exploit trillion way program parallelism may have forced us in to entirely new realms of processing. This presentation will consider the range of alternative technologies, architectures, and methods that may drive the extremes of computing beyond the current incremental steps of the current era.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"65 31","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120817912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SRFS on Ether adds an Ethernet interface to the Shared Rapid File System (SRFS), which is currently used as a distributed file system between nodes of an HPC system. It can be used like NFS, but it solves a problem NFS has not: maintaining data coherency during high-speed data transmission in a broadband environment. Moreover, no tuning of the OS's TCP/IP parameters is required to improve speed, and no special hardware is needed, unlike SAN construction with iFCP and similar approaches. For additional speed, it stripes data streams automatically (by default up to 8 streams) and switches between TCP and UDP based on I/O size. In this bandwidth challenge, we demonstrate security using a host-to-host IPSec connection between Tampa and Tokyo. To show performance, we used a hardware IPSec accelerator and tuned TCP/IP with SRFS on Ether.
{"title":"Secure file sharing","authors":"N. Fujita, H. Ohkawa","doi":"10.1145/1188455.1188707","DOIUrl":"https://doi.org/10.1145/1188455.1188707","url":null,"abstract":"SRFS on Ether adds an ethernet interface to the Shared Rapid File System (SRFS) that is currently used as a distributed file system between nodes by the HPC-system. It can be used like NFS and has solved the problem of data coherency in the high-speed transmission of data in a broadband environment, which NFS has not. Moreover, adjustment of the TCP/IP parameters in the OS to improve speed is unnecessary, and special hardware is not needed, unlike with the SAN construction by iFCP and others. For additional speed, it stripes data streams automatically (default MAX 8 streams), switches protocols between TCP and UDP based on IOsize.In this bandwidth challenge, we demonstrate security using a host-to-host IPSec connection between Tampa and Tokyo. To show performance, we used a hardware IPSec accelerator and tuned TCP/IP with SRFS on Ether's.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"77 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120825139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Innovative cabling solutions will be a key factor in realizing next-generation supercomputing clusters. Demands for higher data rates, larger clusters, and increased density cannot be optimally addressed with existing twin-axial cabling solutions. Quellan, Inc.'s family of low-power, low-latency Lane Manager ICs provides a 2x reach extension over standard cable for single-lane data rates up to 6.25 Gb/s. In addition, the Lane Managers can facilitate increased density and improved airflow through clusters by enabling narrow-gauge cables to operate at maximum lengths comparable to those of standard 24 AWG cabling. Integrated higher-layer features ensure compliance with a variety of current and emerging standards such as InfiniBand, PCI Express, and CX-4. This presentation will highlight the performance and advanced feature set of the Lane Manager family while also detailing the benefits of this technology for addressing various signal integrity challenges inherent to the cabling infrastructure of supercomputing clusters.
{"title":"Enabling next generation supercomputing clusters","authors":"M. Vrazel","doi":"10.1145/1188455.1188732","DOIUrl":"https://doi.org/10.1145/1188455.1188732","url":null,"abstract":"Innovative cabling solutions will be a key factor in realizing next generation supercomputing clusters. Demands for higher data rates, larger clusters, and increased density cannot be optimally addressed with existing twin-axial cabling solutions. Quellan, Inc.'s family of low power, low latency Lane Manager ICs provide a 2x reach extension over standard cable for single lane data rates up to 6.25 Gb/s. In addition, the Lane Managers can facilitate increased density and improved airflow through clusters by enabling narrow gauge cables to operate at maximum lengths comparable to that of standard 24AWG cabling. Integrated higher layer features ensure compliance with a variety of current and emerging standards such as Infiniband, PCI Express, and CX-4. This presentation will highlight the performance and advanced features set of the Lane Manager family while also detailing the benefits of this technology for addressing various signal integrity challenges inherent to the cabling infrastructure of supercomputing clusters.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127400828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Post-secondary and corporate programs in science and engineering have traditionally made good use of internship opportunities to identify, mentor, and develop students in their specialties. Computer and computational sciences, especially in academia and research laboratories, may not be making as effective use of internship opportunities. In discussing why they do not consider including students in their development projects, staff and management often cite lack of time, mentoring skills, and money as barriers to what they would consider an effective internship initiative. This BOF will provide an opportunity for students, supervisors, and other interested people to talk about how to identify internship opportunities and deal with the perceived barriers that inhibit sites from offering pre-service opportunities to high school and college students. Discussions will focus on sharing successful strategies for capitalizing on the enthusiasm and availability of the next wave of scientists and technologists.
{"title":"Internships and mentoring in high performance computing environments","authors":"L. McGinnis","doi":"10.1145/1188455.1188498","DOIUrl":"https://doi.org/10.1145/1188455.1188498","url":null,"abstract":"Post-secondary and corporate programs in science and engineering have traditionally made good use of internship opportunities to identify, mentor and develop students in their specialties. Computer and computational sciences, especially in academia and research laboratories may not be making as effective use of internship opportunities. In discussing why they do not consider including students in their development projects, staff and management often cite personal lack of time, mentoring skills, and money as barriers to what they would consider an effective internship initiative.This BOF will provide an opportunity for students, supervisors, and other interested people to talk about how to identify internship opportunities and deal with perceived barriers that inhibit sites from offering pre-service opportunities to high school and college students. Discussions will focus on sharing successful strategies for capitalizing on the enthusiasm and availability of the next wave of scientists and technologists.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129161766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V² = E - (M): increased Velocity results directly from subtracting the rotating Mass of storage devices from your application's Energy. www.ViON.com/HyperStor: the application of advanced memory devices to dramatically enhance cluster system response. Exchange milliseconds for microseconds (a 1,000-fold improvement) to gain depth, gain breadth, or simply reduce your response time. Pinpoint application of ViON's new HyperStor memory devices to HPC infrastructure will result in performance improvements ranging from 100% to 1500%, depending on your system's architecture. By employing the latest technology against the "weakest link" in your architecture, you can realize any of the above benefits or, in many cases, a mix of them all.
{"title":"Advanced memory devices to enhance cluster performance","authors":"Mike Jones","doi":"10.1145/1188455.1188753","DOIUrl":"https://doi.org/10.1145/1188455.1188753","url":null,"abstract":"V2 = E - (M): Increased Velocity results directly from subtracting rotating Mass of storage devices from your application's Energywww.ViON.com/HyperStor - application of advanced memory devices to dramatically enhance cluster systems response.Exchange milliseconds for microseconds (1,000 fold improvement) to gain depth, breadth or simply reduce your response time. Pinpoint application of ViON's new HyperStor memory devices to HPC infrastructure will result in performance improvements ranging from 100% to 1500%, depending on your system's architecture. By employing the latest technology against the \"weakest link\" in your architecture, you can realize any of the above referenced benefits, or in many cases, a mix of them all.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133625080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Clustered Storage is becoming pervasive and is a major paradigm shift from previous generations of storage products, much like when CDs made records obsolete." ~ Tony Asaro, Enterprise Strategy Group, Oct 2005. The purpose of this presentation is to introduce you to a new paradigm shift that is currently taking place in the data storage industry: the movement toward Clustered Storage architectures. Distributed storage clustering puts the data storage industry in much the same position today that IBM was in in 1981, poised to change the rules of the computer industry. Clustered Storage architectures are changing the rules of how data is stored and accessed. In this paper we will discuss the trends that clearly define clustered storage architectures as the future of data storage, detail the requirements of this new category of storage, and introduce the Isilon® IQ clustered storage solution, the first to deliver on the promises of this paradigm shift.
{"title":"Paradigm shift in the data storage industry","authors":"Sujal Patel","doi":"10.1145/1188455.1188725","DOIUrl":"https://doi.org/10.1145/1188455.1188725","url":null,"abstract":"\"Clustered Storage is becoming pervasive and is a major paradigm shift from previous generations of storage products, much like when CDs made records obsolete.\" ~ Tony Asaro, Enterprise Strategy Group, Oct 2005The purpose of this presentation is to introduce you to a new paradigm shift that is currently taking place in the data storage industry: the movement toward Clustered Storage architectures. Distributed storage clustering for the data storage industry is in much the same position today that IBM was in 1981, poised to change the rules of the computer industry. Clustered Storage architectures are changing the rules of how data is stored and accessed. In this paper we will discuss the trends that clearly define clustered storage architectures as the future of data storage, detail the requirements of this new category of storage, and introduce the Isilon® IQ clustered storage solution which is the first to deliver on the promises of this paradigm shift.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116789756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}