This talk will explore the influence of supercomputing technology on games and the influence of game technology on supercomputing.
{"title":"Real-time supercomputing and technology for games and entertainment","authors":"H. P. Hofstee","doi":"10.1145/1188455.1188662","DOIUrl":"https://doi.org/10.1145/1188455.1188662","url":null,"abstract":"This talk will explore the influence of supercomputing technology on games and the influence of game technology on supercomputing.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123464148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
For the SC|06 Analytics Challenge, we demonstrate an end-to-end solution for processing data produced by high-throughput mass spectrometry (MS)-based proteomics so that biological hypotheses can be explored. The approach is based on a tool called the Bioinformatics Resource Manager (BRM), which interacts with high-performance architectures and experimental data sources to provide high-throughput analytics for a specific experimental dataset. Peptide identification is performed by a high-performance code, Polygraph, which has been shown to scale well beyond 1,000 processors. Visual analytics applications such as PQuad and Cytoscape can then be used to visualize protein identities in the context of pathways, using data from public repositories such as the Kyoto Encyclopedia of Genes and Genomes (KEGG). The end result is that a user can go from experimental spectra to pathway data in a single workflow, reducing the time-to-solution for analyzing biological data from weeks to minutes.
{"title":"High-throughput visual analytics biological sciences: turning data into knowledge","authors":"C. Oehmen, L. McCue, J. Adkins, K. Waters, Tim Carlson, W. Cannon, B. Webb-Robertson, Douglas J. Baxter, Elena S. Peterson, M. Singhal, A. Shah, Kyle R. Klicker","doi":"10.1145/1188455.1188769","DOIUrl":"https://doi.org/10.1145/1188455.1188769","url":null,"abstract":"For the SC|06 analytics challenge, we demonstrate an end-to-end solution for processing data produced by high-throughput mass spectrometry (MS)-based proteomics so biological hypotheses can be explored. This approach is based on a tool called the Bioinformatics Resource Manager (BRM) which will interact with high-performance architecture and experimental data sources to provide high-throughput analytics to a specific experimental dataset. Peptide identification was achieved by a high-performance code, Polygraph, which has been shown to scale well beyond 1000 processors. Visual analytics applications such as PQuad, Cytoscape, or others may be used to visualize protein identities in the context of pathways using data from public repositories such as Kyoto Encyclopedia of Genes and Genomes (KEGG). The end result was that a user can go from experimental spectra to pathway data in a single workflow reducing time-to-solution for analyzing biological data from weeks to minutes.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121065944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Eclipse Parallel Tools Platform (PTP) is an Eclipse Foundation Technology Project (http://eclipse.org/ptp) that allows parallel tools to be integrated into the Eclipse environment. Eclipse offers many features you'd expect from a commercial-quality IDE: a syntax-highlighting editor, incremental code compilation, a source-level debugger, support for source control systems such as CVS and Subversion, code refactoring, and support for multiple languages, including C, C++, and Fortran. PTP provides a highly integrated environment designed for parallel application development. It provides a portable open-source IDE capable of supporting a wide range of parallel architectures and runtime systems; a scalable parallel debugger; support for the integration of a wide range of parallel tools; and an environment that simplifies end-user interaction with parallel systems. This tutorial aims to introduce participants to the Eclipse platform and provide hands-on experience in developing and debugging parallel applications using Eclipse and PTP with C, Fortran, and MPI.
{"title":"Application development using eclipse and the parallel tools platform","authors":"Greg Watson, C. Rasmussen, Beth Tibbitts","doi":"10.1145/1188455.1188668","DOIUrl":"https://doi.org/10.1145/1188455.1188668","url":null,"abstract":"The Eclipse Parallel Tools Platform (PTP) is an Eclipse Foundation Technology Project (http://eclipse.org/ptp) that allows parallel tools to be integrated into the Eclipse environment.Eclipse offers many features you'd expect from a commercial quality IDE: a syntax-highlighting editor, incremental code compilation, a source-level debugger, support for source control systems such as CVS and Subversion, code refactoring, and support for multiple languages, including C, C++, and Fortran.PTP provides a highly integrated environment designed for parallel application development. It provides a portable open-source IDE capable of supporting a wide range of parallel architectures and runtime systems; a scalable parallel debugger; support for the integration of a wide range of parallel tools; and an environment that simplifies the end-user interaction with parallel systems.This tutorial aims to introduce participants to the Eclipse platform and provide hands-on experience in developing and debugging parallel applications using Eclipse and PTP with C, Fortran, and MPI.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121212524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In 2003, DARPA's High Productivity Computing Systems (HPCS) program released the HPC Challenge (HPCC) suite. It examines the performance of HPC architectures using well-known computational kernels with various memory access patterns. Consequently, HPCC results bound the performance of real applications as a function of memory access characteristics and define performance boundaries of HPC architectures. The suite was intended to augment the TOP500 list, and results are now publicly available for 6 of the 10 fastest computers in the world. Implementations exist in most of the major high-end programming languages and environments, accompanied by countless optimization efforts. The increased publicity enjoyed by HPCC does not necessarily translate into a deeper understanding of the performance issues that HPCC measures. This tutorial will therefore introduce attendees to HPCC, provide tools to examine differences among HPC architectures, and give hands-on training intended to lead to a better understanding of parallel environments.
{"title":"The HPC Challenge (HPCC) benchmark suite","authors":"P. Luszczek, D. Bailey, J. Dongarra, J. Kepner, R. Lucas, R. Rabenseifner, D. Takahashi","doi":"10.1145/1188455.1188677","DOIUrl":"https://doi.org/10.1145/1188455.1188677","url":null,"abstract":"In 2003, the DARPA's High Productivity Computing Systems released the HPCC suite. It examines the performance of HPC architectures using kernels with various memory access patterns of well known computational kernels. Consequently, HPCC results bound the performance of real applications as a function of memory access characteristics and define performance boundaries of HPC architectures. The suite was intended to augment the TOP500 list and by now the results are publicly available for 6 out of 10 of the world's fastest computers. Implementations exist in most of the major high-end programming languages and environments, accompanied by countless optimization efforts. The increased publicity enjoyed by HPCC doesn't necessarily translate into deeper understanding of the performance issues that HPCC benchmarks. And so this tutorial will introduce attendees to HPCC, provide tools to examine differences in HPC architectures, and give hands-on training that will hopefully lead to better understanding of parallel environments.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125669509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This BOF is a venue for presentations and discussions of progress on the dual problems of evaluating the performance (including reliability) of petascale computing systems and of developing application codes that scale to run effectively on such systems. We therefore invite participation by representatives of large-system vendors, funding agencies, infrastructure operators, application-development groups, and others that have a stake in the design, operation, and successful use of these systems. Technical topics to be addressed include: scaling properties of benchmark suites; scalable machine models; application modeling to predict scaling for future machines; the problem of preparing applications for petascale environments; and the role of all of these topics in petascale acquisitions. At this session we will begin planning a series of workshops on the subject.
{"title":"Evaluating petascale infrastructure systems: benchmarks, models, and applications","authors":"R. Fowler, A. Snavely, D. Reed","doi":"10.1145/1188455.1188487","DOIUrl":"https://doi.org/10.1145/1188455.1188487","url":null,"abstract":"This BOF is a venue for presentations and discussions of progress in the dual problems of evaluating the performance (including reliability) of petascale computing systems and of developing application codes that scale to run effectively on such systems. We therefore invite participation by representatives of large-system vendors, funding agencies, infrastructure operators, appplication-development groups, and others that have a stake in design, operation, and successful use of these systems.Technical topics to be addressed include: scaling properties of benchmark suites; scalable machine models; application modeling to predict scaling for future machines; the problem of preparing applications for petascale environments; and the role of all of these topics in petascale acquisitions.At this session we will start planning for a series of workshops on the subject.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115945357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In June 2006, Microsoft, in conjunction with NCSA, completed a Top 500 benchmark run on a 900-processor Dell PowerEdge 1855 cluster running Windows Compute Cluster Server (CCS) Version 1. The result was an Rmax of 4.1 Tflop/s, placing the cluster at number 130 on the July 2006 Top 500 list. This was a significant accomplishment for an offering focused on a design point of 64 nodes. Attend this session to hear about our experiences compiling and running the High-Performance Linpack (HPL) benchmark in a large-scale Windows environment with CCS. We will cover the design of a large Windows cluster, the tools used to provision the cluster, the tools used to compile the benchmark, the job-submission process for parallel execution of the program, and the actual results. We will also discuss what we learned from the process and how that information will lead to improvements in future versions of CCS.
{"title":"Running a Top-500 benchmark on a windows compute cluster server cluster","authors":"Frank Chism, J. Enos","doi":"10.1145/1188455.1188747","DOIUrl":"https://doi.org/10.1145/1188455.1188747","url":null,"abstract":"In June 2006 Microsoft in conjunction with NCSA completed a Top 500 benchmark on a 900 processor Dell PowerEdge 1855 cluster running Windows CCS Version one. The result was a 4.1 Tflop Rmax number; placing this cluster at number 130 in the July 2006 Top 500 List. This was a significant accomplishment for an offering focused on a design point of 64 nodes.Attend this session to hear about our experiences compiling and running the HPC Linpack benchmark in a large-scale Windows environment with CCS. We will cover the design of a large Windows cluster, the tools used to provision the cluster, the tools used to compile the benchmark, the job submission process for the parallel execution of the program, and the actual results. We will also talk about what we learned from the process and how that information will lead to improvements for future versions of CCS.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134365470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-performance data transfer is a core requirement of many supercomputing applications. From basic FTP file transfers to P2P or Grid applications, moving data across wide-area networks at high speed is critically important. This tutorial covers a wide range of approaches used to achieve this. The first half focuses on advanced networking technology and low-level performance issues such as delay, loss, switching/routing, TCP and UDP dynamics, and system and network tuning. The second half looks at higher-level approaches to improving performance, from improved protocols and parallel transfers to peer-to-peer and grid techniques and abstract storage services. Attendees should come away with a detailed understanding of data transfer over wide-area networks and exposure to a great number of tools and utilities for tuning, debugging, and improving their ability to move data at high speed.
{"title":"High performance data transfer","authors":"P. Dykstra","doi":"10.1145/1188455.1188685","DOIUrl":"https://doi.org/10.1145/1188455.1188685","url":null,"abstract":"High Performance Data Transfer is a core requirement of many Supercomputing applications. From basic FTP file transfers to P2P or Grid applications, moving data across wide area networks at high speed is critically important. This tutorial covers a wide range of approaches used to achieve this.The first half focuses on advanced networking technology and low-level performance issues such as delay, loss, switching/routing, TCP and UDP dynamics, system and network tuning. The second half looks at higher level approaches to improving performance, from improved protocols, parallel transfers, peer-to-peer and grid techniques, and abstract storage services.The attendee should come away with a detailed understanding of data transfer over wide area networks, and exposure to a great number of tools and utilities to tune, debug, and improve their ability to move data at high speed.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131493497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Over the past decade, computing power has increased extraordinarily, enabling researchers to tackle large, complex problems -- a great step forward in our ability to model the complex world. This increased capability has resulted in massive amounts of data being produced and consumed by increasingly detailed simulations. It is the location, viewing, manipulation, storage, movement, sharing, and interpretation of this data -- now measured in hundreds of terabytes and growing -- that is causing a major performance bottleneck. Data-intensive computing has emerged as a topic to address this problem. The Advanced Virtual Engine Test Cell (AVETeC) Inc. and Computer Sciences Corporation (CSC) are under contract to establish, integrate, and manage a test bed located at the Department of Defense Major Shared Resource Center at Wright-Patterson Air Force Base, OH, and at AVETeC in Springfield, OH, in collaboration with the Department of Energy and NASA Goddard in Greenbelt, MD. This test bed environment will actively evaluate emerging technologies to improve data management.
{"title":"Data intensive computing","authors":"Leslie S. Perkins, P. Andrews, D. Panda, Dave Morton, R. Bonica, N. H. Werstiuk, Randy Kreiser","doi":"10.1145/1188455.1188528","DOIUrl":"https://doi.org/10.1145/1188455.1188528","url":null,"abstract":"Over the past decade, computing power has increased extraordinarily, enabling researchers to tackle large complex problems -- a great step forward in our ability to model the complex world. This increased capability resulted in massive amounts of data being produced and used by increasingly detailed simulations. It is the location, viewing, manipulation, storage, movement, sharing, and interpretation of this data -- now in hundreds of terabytes and growing -- that is causing major performance bottleneck. Data Intensive Computing has emerged as a topic to address the problem. The Advanced Virtual Engine Test Cell (AVETeC) Inc. and Computer Sciences Corporation (CSC) are under contract to establish, integrate and manage a test bed located at Department of Defense Major Shared Resource Center, Wright-Patterson Air Force Base, OH, at AVETeC in Springfield, OH collaborating with Department of Energy, and NASA Goddard, Greenbelt, MD. This test bed environment will actively evaluate emerging technologies to improve data management.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131028332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taking the TOP500 as a basis, the trend toward an ever-increasing number of processors is striking. The average crossed the mark of 1,000 processors per system in 2005, with the top systems roughly two orders of magnitude larger. Yet the majority of so-called "real world applications" still struggle to show decent performance improvements even on a few hundred processors. The objective of this BOF is to discuss and better understand the roadblocks that must be removed to allow applications to scale to thousands of processors.
{"title":"Extreme application scalability","authors":"W. Oed","doi":"10.1145/1188455.1188478","DOIUrl":"https://doi.org/10.1145/1188455.1188478","url":null,"abstract":"Taking the TOP500 as a base, the trend to an ever increasing number of processors is striking. The average crossed the 1,000 processor per system mark in 2005, the top being roughly two orders of magnitude larger. Yet, the majority of so-called \"real world applications\" still struggles to show decent performance improvements even on a few hundred processors.The objective of this BOF is to discuss and better understand the roadblocks to be removed to allow applications to scale to thousands of processors.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115410516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Several federal agencies have announced plans to acquire, within the next few years, large-scale systems with peak performance in the petascale range. To make sure that end users can take maximum advantage of these systems, federal agencies cooperating through the NITRD Program are following a multi-faceted approach that: 1) provides access to leadership-class platforms at DOD, DOE, NASA, and NSF; 2) determines how to handle data on the petabyte scale through their file systems and I/O activities; and 3) aids the development of petascale applications through support of multi-disciplinary teams, as in the DOE/SC SciDAC Program, NSF science application programs, and the DOE/NNSA ASC Academic Alliance Program, for example. In addition, the DARPA HPCS Program is focused on the development of next-generation architectures that will substantially improve the productivity, usability, and application scope of multi-petaflop systems. This BOF, presented by the HEC IWG, will review the implementation, status, and future plans for these and other activities.
{"title":"Approaching petascale computing","authors":"Grant Miller","doi":"10.1145/1188455.1188475","DOIUrl":"https://doi.org/10.1145/1188455.1188475","url":null,"abstract":"Several federal agencies have announced plans to acquire large scale systems within the next few years which will have peak performance in the petascale range. In order to make sure that end users can take maximum advantage of these systems, Federal agencies, cooperating through the NITRD Program are following a multi-faceted approach which: 1) provides access to leadership class platforms at DOD, DOE, NASA and NSF, 2) determines how to handle data on the petabyte scale through their File Systems and I/O activities, and 3) aids the development of petascale applications through support of multi-disciplinary teams as in DOE/SC SciDAC Program, NSF science application programs, and DOE/NNSA ASC Academic Alliance Program for example. In addition, the DARPA HPCS Program is focused on the development of next generation architectures that will substantially improve the productivity, useability and application scope of multi-petaflop systems. This BOF, presented by the HEC IWG will review the implementation, status and future plans for these and other activities.","PeriodicalId":115940,"journal":{"name":"Proceedings of the 2006 ACM/IEEE conference on Supercomputing","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122027400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}