A. Gothandaraman, G. L. Warren, G. D. Peterson, R. Harrison
Recent advances in FPGA technology make FPGAs an attractive platform for accelerating scientific computing applications. We present a novel hardware accelerator for Quantum Monte Carlo simulations of N-body systems. The design is deeply pipelined and exploits the fine-grained parallelism inherent in an FPGA, which performs all calculations. The design is implemented on a Xilinx Virtex-II Pro XC2VP30 device, and preliminary results indicate a maximum operating frequency of 100 MHz. A single instance of our design offers an estimated 20x speedup over, and accuracy comparable to, the serial code running on a 2.8 GHz Intel Pentium 4 processor. The architecture performs all computations in fixed-point representation and delivers accuracy on the order of, or better than, double-precision floating point. Having deployed a single instance on the present FPGA platform, we plan to target the Cray XD1 platform, whose higher gate-density FPGA will allow us to operate multiple cores in parallel.
Reconfigurable accelerator for quantum Monte Carlo simulations in N-body systems. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188638
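The accelerator above reports fixed-point arithmetic matching or exceeding double-precision accuracy. As a rough illustration of why that is plausible (a hypothetical sketch, not the paper's pipeline: the Q22.42 format, the synthetic sine-based terms, and the compensated reference sum are all assumptions made for this example), the following C program accumulates one million bounded terms in a 64-bit fixed-point accumulator and in a plain double accumulator, then compares both against a Kahan-compensated long-double reference. Because fixed-point additions are exact once each term is quantized, the fixed-point sum typically lands much closer to the reference.

/* Hypothetical sketch: Q22.42 fixed-point accumulation vs. plain double
 * accumulation, measured against a Kahan-compensated long-double sum.
 * Not the paper's design; illustration only. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAC_BITS 42   /* Q22.42: 22 integer bits, 42 fraction bits */

static int64_t to_fix(double x) { return (int64_t)llround(x * (double)(1LL << FRAC_BITS)); }

int main(void) {
    const int n = 1000000;
    int64_t     acc_fix = 0;          /* fixed-point sum: each addition is exact */
    double      acc_dbl = 0.0;        /* double sum: rounds on every addition */
    long double acc_ref = 0.0L, comp = 0.0L;

    for (int i = 1; i <= n; i++) {
        double term = 0.5 * (1.0 + sin((double)i));   /* stand-in for a per-pair term in [0,1] */
        acc_fix += to_fix(term);
        acc_dbl += term;
        /* Kahan-compensated long-double reference sum */
        long double y = (long double)term - comp;
        long double t = acc_ref + y;
        comp = (t - acc_ref) - y;
        acc_ref = t;
    }

    long double fix_val = (long double)acc_fix / (long double)(1LL << FRAC_BITS);
    printf("fixed (Q22.42): err = %.3Le\n", fabsl(fix_val - acc_ref));
    printf("double        : err = %.3Le\n", fabsl((long double)acc_dbl - acc_ref));
    return 0;
}

The point being illustrated is that a wide fixed-point accumulator keeps a constant absolute resolution, whereas a floating-point accumulator loses low-order bits as the running sum grows.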
Exploring novel computer system designs requires modeling the complex interactions between processor, memory, and network. The Structural Simulation Toolkit (SST) has been developed to explore innovations in both the programming models and the hardware implementation of highly concurrent systems. The Toolkit's modular design allows extensive exploration of system parameters while maximizing code reuse, and it provides an explicit separation of instruction interpretation from microarchitectural timing. It is built upon a high-performance hybrid discrete-event framework. The SST has modeled a variety of systems, from processor-in-memory to CMP and MPP, and has examined a variety of hardware and software issues in the context of HPC. This poster presents an overview of the SST. Several of its models for processors, memory systems, and networks will be detailed. Its software stack, including support for MPI and OpenMP, will also be covered. Performance results and current directions for the SST will also be shown.
The structural simulation toolkit: exploring novel architectures. Arun Rodrigues, R. Murphy, P. Kogge, K. Underwood. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188618
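The toolkit is described as resting on a hybrid discrete-event framework that separates component timing from instruction interpretation. As a generic illustration of a discrete-event core (a hypothetical sketch; none of these names are SST interfaces), the following C fragment keeps a time-ordered event list and dispatches each event to a component callback:

/* Hypothetical discrete-event core: a time-ordered event list whose events
 * invoke component callbacks. Illustrative only; not the SST API. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

typedef void (*handler_fn)(void *component, uint64_t now);

typedef struct event {
    uint64_t time;            /* simulated time at which the event fires */
    handler_fn handler;       /* component callback */
    void *component;          /* opaque component state */
    struct event *next;
} event;

static event *queue = NULL;   /* singly linked list kept sorted by time */

static void schedule(uint64_t time, handler_fn h, void *c) {
    event *e = malloc(sizeof *e);
    e->time = time; e->handler = h; e->component = c;
    event **p = &queue;
    while (*p && (*p)->time <= time) p = &(*p)->next;   /* insert in time order */
    e->next = *p; *p = e;
}

static void run(uint64_t end_time) {
    while (queue && queue->time <= end_time) {
        event *e = queue; queue = e->next;
        e->handler(e->component, e->time);
        free(e);
    }
}

/* Toy "memory" component: responds to a request 100 time units later. */
static void cpu_got_reply(void *c, uint64_t now) { (void)c; printf("[%llu] cpu: reply received\n", (unsigned long long)now); }
static void mem_request(void *c, uint64_t now) {
    printf("[%llu] mem: request arrives, replying at +100\n", (unsigned long long)now);
    schedule(now + 100, cpu_got_reply, c);
}

int main(void) {
    schedule(10, mem_request, NULL);
    schedule(250, mem_request, NULL);
    run(1000);
    return 0;
}

A full framework adds component configuration, instruction interpretation, and parallel synchronization; the single sorted list here shows only the sequential kernel.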
The Open MPI Project is a growing community surrounding an open source implementation of the Message Passing Interface, developed in a collaboration between research institutions, HPC vendors, and US national laboratories. Open MPI has evolved into a high performance, scalable MPI implementation, providing a highly modular architecture that not only adapts to a wide variety of environments but also uniquely lends itself to HPC research. The meeting will consist of three parts: (1) members of the Open MPI core development team will present the current status of Open MPI; (2) possible future directions for Open MPI will be discussed, with feedback actively solicited from real-world MPI users and from ISVs with MPI-based products; and (3) an open discussion of how HPC researchers can leverage the Open MPI project for their own work, and how MPI users can obtain the best performance from real-world applications.
Open MPI community meeting. J. Squyres, Brian W. Barrett. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188461
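For readers new to the project, here is a minimal MPI program of the kind an Open MPI installation builds with mpicc and launches with mpirun (the program itself is generic MPI, not Open MPI-specific):

/* Minimal MPI example: rank 0 collects a greeting from every other rank. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        char msg[64];
        snprintf(msg, sizeof msg, "hello from rank %d of %d", rank, size);
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else {
        for (int src = 1; src < size; src++) {
            char msg[64];
            MPI_Recv(msg, sizeof msg, MPI_CHAR, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", msg);
        }
    }
    MPI_Finalize();
    return 0;
}

Typical build and launch with Open MPI: mpicc hello.c -o hello, then mpirun -np 4 ./hello.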
Liquid Computing has delivered the world's first Interconnect Driven Server. The company's flagship product, LiquidIQ(tm), merges computing and communications resources into a scalable system that delivers sustained performance throughput. It also introduces entirely new manageability and control characteristics that cannot be achieved with today's commodity servers, switching equipment and software overlays. The underlying IQInterconnect(tm) delivers highly redundant communications across multiple chassis by addressing key routing and load balancing limitations of common interconnect technologies. LiquidIQ server instances are defined in software; computing, memory and I/O resources can be combined from any location in the system without performance degradation. LiquidIQ provides the flexibility to support both large SMP and cluster operations using MPI and UPC. This session will review the specifics of its integrated, optimized and controlled system architecture and provide session attendees with performance results from its beta program with several leading high performance computing users.
Introducing LiquidIQ: a next generation system for high performance computing. Mike Kemp. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188762
High temperature superconductors could potentially revolutionize the use and transmission of electric power. This, along with intriguing scientific questions, has motivated an enormous research effort in the twenty years since the discovery of the high-temperature superconducting cuprates. But only recently, with the advent of massively parallel vector supercomputers and quantum cluster methods for solving the quantum many-body problem, has it become possible to simulate superconductivity in models such as the 2D Hubbard model that are thought to describe the cuprate superconductors. In this presentation, we will discuss the recent progress we have made in understanding superconductivity in the Hubbard model, as well as what appear to be the key ingredients for the development of materials-specific extensions that will eventually allow predictive simulations of the superconducting transition temperature. We will also discuss the computational challenges we face with materials-specific simulations of superconductivity.
Toward material-specific simulations of high temperature superconductivity. T. Schulthess. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188521
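For reference, the single-band 2D Hubbard model mentioned above is conventionally written (standard notation, not reproduced from the presentation) as
\[
H \;=\; -t \sum_{\langle i,j\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow},
\]
where $c^{\dagger}_{i\sigma}$ creates an electron with spin $\sigma$ on lattice site $i$, $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$ is the number operator, $t$ is the nearest-neighbor hopping amplitude, and $U$ is the on-site Coulomb repulsion. Quantum cluster methods of the kind mentioned above approximate this lattice problem by an embedded cluster that is solved numerically.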
This tutorial is about advanced use of MPI, in particular the parallel I/O and one-sided communication features added in MPI-2. Implementations are now available both from vendors and from open-source projects, so these MPI-2 capabilities can really be used in practice. The tutorial will be heavily example-driven. For each example we introduce concepts, describe the problem being solved, and then walk through the code and its execution. Examples were chosen to cover scenarios seen in real applications, such as 1D and 2D mesh decomposition, checkpointing of sparse data structures, and providing atomic access to shared-memory data structures. Attendees will leave the tutorial with both an understanding of these advanced concepts and a collection of working example codes that they are familiar with and have seen in action. This will prepare them to apply these concepts in their own applications.
Advanced MPI: I/O and one-sided communication. W. Gropp, E. Lusk, R. Thakur, R. Ross. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188666
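To give a flavor of the MPI-2 features the tutorial covers (a generic sketch in the spirit of such examples, not code taken from the tutorial), the fragment below exposes a window on rank 0 for one-sided MPI_Put, then has every rank write its value at its own offset in a shared file with MPI-IO:

/* MPI-2 sketch: one-sided communication into a window owned by rank 0,
 * followed by an MPI-IO write where each rank stores its rank number at
 * its own byte offset. Generic illustration only. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* --- One-sided: every rank puts its rank number into rank 0's window --- */
    int *table = NULL;
    MPI_Win win;
    if (rank == 0) MPI_Alloc_mem((MPI_Aint)size * sizeof(int), MPI_INFO_NULL, &table);
    MPI_Win_create(rank == 0 ? table : NULL,
                   rank == 0 ? (MPI_Aint)size * sizeof(int) : 0,
                   sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    MPI_Put(&rank, 1, MPI_INT, 0, (MPI_Aint)rank, 1, MPI_INT, win);
    MPI_Win_fence(0, win);
    if (rank == 0)
        for (int i = 0; i < size; i++) printf("table[%d] = %d\n", i, table[i]);
    MPI_Win_free(&win);
    if (rank == 0) MPI_Free_mem(table);

    /* --- MPI-IO: each rank writes its rank number at offset rank*sizeof(int) --- */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "ranks.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)rank * sizeof(int), &rank, 1, MPI_INT,
                      MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}

The two fences bracket the access epoch so that the puts are complete before rank 0 reads the table; the file write needs no explicit coordination because each rank targets a disjoint offset.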
This workshop will cover advances and innovations in applying graphics processing unit (GPU) capabilities to nontraditional, general-purpose computing, with the GPU acting as an adjunct vector/matrix processor. Examples include game physics, image processing, scientific computing, sorting, and database query processing, to name a few. The workshop consists of invited speakers and poster presenters who provide insight into GP2 practice and the experience gained over the last several years. Both perspectives are crucial to understanding upcoming and future heterogeneous and homogeneous multi-core processor architectures. Developing and adapting software to exploit the GPU's embarrassingly parallel capabilities presented numerous implementation challenges, resulting in a variety of approaches to integrating an attached processor with a CPU to achieve high problem throughput. Similar software development and integration challenges face the multi-core processor future, building on the GP2 foundation.
General-purpose GPU computing: practice and experience. B. S. Michel. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188698
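As a concrete picture of the "embarrassingly parallel" workloads the workshop targets, the plain-C sketch below shows SAXPY, the canonical data-parallel kernel; on a GPU each loop iteration would become an independent hardware thread rather than a sequential loop step (this is an illustration of the pattern, not GPU code):

/* SAXPY (y = a*x + y): every element is independent, which is exactly the
 * structure a GPU exploits by mapping one element to one thread. */
#include <stdio.h>
#include <stdlib.h>

static void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)      /* no loop-carried dependence: fully parallel */
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    free(x); free(y);
    return 0;
}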
D. McCaulay, Matthew R. Link, George W. Turner, David Y. Hancock, Maria Morris, C. Stewart
Indiana University's 20.48 Teraflop IBM e1350 BladeCenter supercomputer system ("Big Red") has been made available to researchers throughout the U.S. via the NSF-funded TeraGrid. The Big Red system finished 23rd on the June 2006 Top500 list, making it at that time the fastest supercomputer owned and operated by a U.S. university and the fastest computer made available through the TeraGrid to date. Big Red is a distributed shared-memory cluster consisting of 512 IBM BladeCenter JS21s, each with two dual-core PowerPC 970MP processors (2.5 GHz), 8 GB of ECC PC3200 SDRAM, 72 GB of local SATA disk for scratch space, and a PCI-X Myrinet 2000 adapter for high-bandwidth, low-latency MPI communication; the system is backed by a 360 TB GPFS parallel filesystem. A significant portion of Big Red will be allocated to TeraGrid utilization. The TeraGrid is the National Science Foundation's flagship effort to create a national cyberinfrastructure to support academic research and promote scientific discovery.
Powerful new research computing system available via the TeraGrid. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188624
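The quoted 20.48 Teraflop figure is consistent with a standard peak-rate calculation, assuming (as is conventional for the PowerPC 970MP, whose two floating-point units each complete a fused multiply-add per cycle) four double-precision floating-point operations per core per cycle:
\[
512\ \text{nodes} \times 2\ \text{processors} \times 2\ \text{cores} \times 2.5\ \text{GHz} \times 4\ \tfrac{\text{flops}}{\text{cycle}} = 20.48\ \text{TFLOPS}.
\]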
Effectively using the Grid computing environment requires a working knowledge of several interacting layers of technology, a potentially daunting task for the new user. GRIDS Center presents a hands-on, full-day tutorial moving the new Grid user through a guided series of activities introducing concepts such as the Globus toolkit, Grid security certificates, reliable file transfer, simple job management, and workflow management. The tutorial introduces essential skills that will be needed to conduct and support use of the Grid computing environment. Hands-on exercises will be presented using a small dedicated grid hosted for this purpose at NCSA. Reflecting attendee feedback from the SC 2005 workshop, we will include more material on grid security and account management tools and revise and extend workflow management examples.
Introduction to grid computing: the first steps. David Gehrig, D. M. Freemon, J. Frey. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). DOI: 10.1145/1188455.1188682
SC06, the premier international conference on high performance computing, networking and storage, will convene in November 2006 in Tampa, Florida. This year the conference will take its inspiration from Albert Einstein who said "Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination."

Following the traditions set with the first SC conference in 1988, exciting technical and educational programs, workshops, tutorials, exhibits, demonstrations and many other activities await attendees. SC06 is the one place where attendees can see tomorrow's technology being used to solve world-class challenge problems today.

With the conference growth over the past years, SC06 now has a total registered attendance in excess of 7,000. This attendance provides an excellent forum for researchers to explore ideas and build collaborations.

The following are some of the SC06 highlights:

SC06 provides a rigorous technical paper program with refereed papers on systems hardware and software, networking, storage, instruments, sensors, grids and web services, along with novel applications of these technologies to problems of interest to science, engineering, business and society.

SC06 also provides an engaging 2-day tutorials program that welcomes attendees to explore the practical aspects of a full spectrum of high performance computing, networking, storage and analysis topics. Tutorial attendees have the opportunity to learn about new topics and investigate familiar topics in depth with other experts.

Upgrades in systems, bandwidth and networking technologies over the last decade have resulted in dramatic increases in performance, scalability and overall computational power in high performance computing. More than ever before, organizations in commercial, government, university and research sectors are tasked with making sense of huge amounts of data.

HPC Analytics will again highlight rigorous and sophisticated methods of data analysis and visualization used in high performance computing by showcasing powerful analytics applications solving complex, real-world problems. SC06 will explore the ways in which high performance computing, networking, storage and analysis lead to advances in research, education and commerce. Innovative and diverse technologies are implemented within the HPC world every year. SC06 will introduce an initiative focusing on those emerging concepts and technologies that have the potential to reshape the HPC landscape.

The SC06 Education Program will continue the program begun in 2005 to bring K-16 teachers and faculty to the conference and provide them the tools and expertise to incorporate modeling and simulation into their classrooms.

SC06 will be the foremost place to learn about the most important developments in High Performance Computing. On behalf of the organizing committee, we invite you to join us for a stimulating week in November 2006.
Proceedings of the 2006 ACM/IEEE Conference on Supercomputing (SC06). Barbara Horner-Miller. DOI: 10.1145/1188455