Proceedings of the 2006 ACM/IEEE conference on Supercomputing: Latest Publications

PVFS: a parallel file system
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188490
R. Ross, R. Latham
High-performance computers require a highly capable file system. PVFS is an open source parallel file system developed through a collaboration led by Argonne National Laboratory, Clemson University, and the Ohio Supercomputing Center. PVFS has been delivering features and high performance on today's top high-end computers while remaining easy to deploy on clusters of any size. Our previous BOF sessions have focused on new developments. While we are still actively working on new features, this session will highlight the large and vibrant PVFS user community and a variety of their applications. Several notable users will share their experiences with PVFS. As always, PVFS developers will be present for an open forum discussion. Users, researchers, and the merely curious are all encouraged to attend.
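For readers unfamiliar with how applications typically reach PVFS, the common path is MPI-IO (ROMIO), which can address a PVFS volume directly via a "pvfs2:"-prefixed file name. The sketch below, using the mpi4py bindings, is a minimal illustration of each rank writing its own block to such a volume; the mount path is hypothetical, not taken from the abstract.

```python
# Minimal MPI-IO sketch against a PVFS volume (the path is hypothetical).
# Run with: mpiexec -n 4 python pvfs_io.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# ROMIO can talk to PVFS directly when the filename carries the
# "pvfs2:" prefix, bypassing the kernel VFS path.
fh = MPI.File.Open(comm, "pvfs2:/pvfs/scratch/demo.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)

# Each rank writes a contiguous block at a rank-derived offset.
data = np.full(1024, rank, dtype=np.int32)
fh.Write_at_all(rank * data.nbytes, data)

fh.Close()
```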
Citations: 30
Addressing high performance and grid challenges: Intel and CERN
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188716
S. Wheat, Bob Jones
Intel will briefly introduce its new HPC products and upgrades, along with its future HPC plans, roadmaps, and challenges, and will invite the director of the CERN EGEE Project to demonstrate the successful relationship between Intel and this leading scientific organization.
Citations: 0
WiFiber: new spectrum links for wireless gigabit transmission
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188752
J. Wells
This paper will discuss the current state of multi-gigabit wireless communications for very high bandwidth, short-haul connectivity in education, research, and government deployments. It will address spectrum availability and suitability for multi-gigabit services, including the 1 to 10 Gbps that GigaBeam has committed to and the 100 Gbps possibility the company is now exploring. The talk will cover the technology evolution that has enabled these wireless data rates, allowing fiber-like throughput, particularly where it eases the difficulties presented by "last mile" fiber connectivity. There will be a focus on current and soon-to-be-implemented applications and a look at models of how university, military, government, and related IT enterprises are using this technology today. Special attention will be paid to NAS, SAN, remote computing, grid computing, and other similar applications.
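To put the quoted rates in perspective, a quick back-of-the-envelope calculation shows what they mean for bulk data movement; the 80% usable-efficiency factor is an illustrative assumption, not a figure from the talk.

```python
# Rough transfer-time estimate for a 1 TB dataset at the quoted link rates.
TB_BITS = 8 * 10**12          # 1 TB expressed in bits (decimal units)
EFFICIENCY = 0.8              # assumed usable fraction of the raw link rate

for gbps in (1, 10, 100):
    usable = gbps * 10**9 * EFFICIENCY   # usable bits per second
    seconds = TB_BITS / usable
    print(f"{gbps:>3} Gbps link: ~{seconds / 60:.1f} minutes per terabyte")
```

At these rates a terabyte moves in roughly 167, 17, and 2 minutes respectively, which is why the abstract can speak of fiber-like throughput for NAS, SAN, and grid traffic.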
Citations: 0
HPC storage systems of 2020
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188765
Garth A. Gibson, M. Kryder, R. Freitas
Experts in advanced system technologies will predict the design of the best HPC storage systems of 2020. They will defend why they think the technology they select will be the winning technology 15 years from now. The panelists will pick one set of technologies - not a list of possibilities - to define the system. They will define the performance and other aspects of the technologies and explain why their system is the most likely to succeed. Besides questions and comments at the session, attendees will vote for the proposed 2020 systems they think are most likely to succeed. The presentations, votes, and attendee comments will all be sealed in a time capsule that will be opened in 2020 and used to compare the predictions to reality. The time capsule will also include an appropriate prize for the presenter who made the best prediction. The winner needs to be present to collect their prize.
Citations: 0
Digital Sherpa
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188617
R. C. Price, V. Bazterra, Wayne B. Bradford, J. Facelli
Currently, users of high-performance computers are overwhelmed with non-scalable tasks such as job submission and monitoring. Many users are limited by the number of jobs they can submit to one High Performance Computing (HPC) resource at a time, which results in very long queue times. Digital Sherpa is a grid application for executing jobs on many separate HPC resources at a time, which can reduce total queue time. It automates non-scalable tasks such as job submission and monitoring, and includes recovery features such as resubmission of failed jobs. Digital Sherpa has been implemented for MGAC, a parallel distributed application for the prediction of atomic clusters and crystal structures using genetic algorithms. It has also been used successfully in a prototype of an HPC-oriented combustion simulation application as well as on the TeraGrid. The high-level goal is to allow Digital Sherpa to interoperate with any HPC application.
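The abstract does not show Digital Sherpa's internals, but the submit-monitor-resubmit pattern it automates can be sketched generically. The snippet below is a hypothetical illustration using the standard PBS qsub/qstat commands; the script name, polling interval, and retry limit are assumptions, not part of Digital Sherpa.

```python
# Generic submit/monitor/resubmit loop in the spirit of what the abstract
# describes; "job.pbs" and the retry policy are illustrative assumptions.
import subprocess
import time

MAX_RETRIES = 3

def submit(script):
    # qsub prints the new job's ID on stdout.
    out = subprocess.run(["qsub", script], capture_output=True,
                         text=True, check=True)
    return out.stdout.strip()

def is_queued_or_running(job_id):
    # qstat exits non-zero once the job has left the queue.
    return subprocess.run(["qstat", job_id],
                          capture_output=True).returncode == 0

def run_with_resubmission(script, succeeded):
    """Submit, wait for completion, and resubmit on failure."""
    for attempt in range(1, MAX_RETRIES + 1):
        job_id = submit(script)
        while is_queued_or_running(job_id):
            time.sleep(30)
        if succeeded():            # e.g. check that an output file exists
            return job_id
        print(f"attempt {attempt} failed; resubmitting")
    raise RuntimeError("job failed after all retries")
```

A multi-resource version would simply run one such loop per HPC site, which is where the total-queue-time reduction the abstract claims would come from.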
Citations: 0
Exploring the importance of high availability MPIs
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188496
Hakon O. Bugge
Pharmaceutical research. Weather prediction. Oil exploration. The data analysis these jobs demand can be immense. As more applications run on Linux clusters, there are many applications for which job completion is critical. Today's jobs are getting longer, and it is not unusual to come across jobs with run times lasting multiple days. As the number of nodes in a cluster expands, the likelihood that a job will encounter a hardware-related failure before completing becomes statistically significant. For an application like this, the "cost" of having the job fail and having to restart it is enormous. You need efficient ways to help drive jobs to completion or to recover from failures. This session will review the importance of high availability functionality in high-performance computing MPIs when running communication-intensive applications. Different approaches to cooperative and distributed checkpoint-restart will also be explored.
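The cooperative (application-level) flavor of checkpoint-restart mentioned above is easy to illustrate: the application periodically serializes its own state so that a resubmitted job resumes from the last checkpoint instead of from scratch. A minimal sketch follows; the state layout and checkpoint file name are made up for illustration.

```python
# Minimal application-level checkpoint-restart sketch; the state layout
# and checkpoint path are illustrative assumptions.
import os
import pickle

CHECKPOINT = "state.ckpt"

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)          # resume after a failure
    return {"step": 0, "result": 0.0}      # fresh start

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)            # atomic: never a torn checkpoint

state = load_state()
for step in range(state["step"], 1_000_000):
    state["result"] += step * 1e-6         # stand-in for real computation
    state["step"] = step + 1
    if step % 10_000 == 0:
        save_state(state)
```

The distributed variants discussed in the session extend this idea across ranks, coordinating so that all processes checkpoint a mutually consistent state.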
Citations: 0
Sidney Fernbach award lecture: solving Einstein's equations through computational science
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188659
E. Seidel
Einstein's equations of general relativity govern such exotic phenomena as black holes, neutron stars, and gravitational waves. Known for nearly a century, they are among the most complex in physics, and solving them in the general case requires very large scale computational power - which we are just on the verge of achieving - and advanced algorithms. I will motivate and describe the structure of these equations, and the worldwide effort to develop advanced and collaborative computational tools utilizing supercomputers, data archives, optical networks, grids, and advanced software to solve them in their full generality. I will focus on applications of these tools to extract new physics from relativistic astrophysical systems. In particular, I will summarize recent progress in the study of black hole collisions, considered to be promising sources of observable gravitational waves that may soon be seen for the first time by the worldwide network of gravitational wave detectors (LIGO, VIRGO, GEO, and others).
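For reference, the field equations in question can be written compactly. In geometrized units (G = c = 1) and with the cosmological constant omitted, they read:

```latex
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} \;=\; 8\pi\, T_{\mu\nu}
```

Behind this compact notation sit ten coupled nonlinear partial differential equations for the metric components, which is what makes the general case so computationally demanding.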
Citations: 0
Charm++ simplifies coding for the cell processor
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188596
David M. Kunzman, G. Zheng, Eric J. Bohm, James C. Phillips, L. Kalé
While the Cell processor, jointly developed by IBM, Sony, and Toshiba, has great computational power, it also presents many challenges, including portability and ease of programming. We have been adapting the Charm++ Runtime System to utilize the Cell. We believe that the Charm++ model fits well with the Cell for many reasons: encapsulation of data, effective prefetching, the ability to peek ahead in message queues, virtualization, etc. To these ends, we have developed the Offload API (an independent code) which allows Charm++ applications to easily take advantage of the Cell. Our goal is to allow Charm++ programs to run on Cell-based and non-Cell-based platforms without modification to application code. Example Charm++ programs using the Offload API already exist. We have also begun modifying NAMD, a popular molecular dynamics code, to use the Cell. In this poster, we plan to present current progress and future plans for this work.
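The abstract does not document the Offload API itself, so the sketch below is only a toy rendering of the general pattern it names: encapsulate a chunk of work with its data, hand it to a worker, and collect the result asynchronously. All names and structure here are invented for illustration and do not reflect the actual Offload API.

```python
# Toy "offload" pattern: self-contained work packets handed to a worker
# pool, loosely mimicking sending work requests to accelerator cores.
# Everything here is illustrative, not the Charm++ Offload API.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

@dataclass
class WorkRequest:
    # Encapsulation matters: the packet carries everything the worker
    # needs, so its data can be staged (prefetched) ahead of execution.
    values: list

def kernel(req: WorkRequest) -> float:
    return sum(v * v for v in req.values)   # stand-in compute kernel

if __name__ == "__main__":
    requests = [WorkRequest([float(i)] * 256) for i in range(8)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        # Submitting everything up front leaves a queue of pending work,
        # which is what enables the look-ahead and prefetching the
        # abstract attributes to the real runtime.
        futures = [pool.submit(kernel, r) for r in requests]
        results = [f.result() for f in futures]
    print(results)
```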
Citations: 3
Altair's PBS professional update
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188484
M. Humphrey
This year's BOF presentation will concentrate on the new features and functionality released during 2006 (PBS Professional versions 7.1 and 8.0), with a glimpse into our development roadmap. The BOF provides PBS users with an opportunity to ask Altair personnel questions about the new features and also to give Altair direction on our future roadmap. PBS Professional is the preferred workload management solution for HPC data centers, with more than 1,400 sites deployed worldwide. PBS Professional is utilized in many vertical industries: defense, intelligence, weather prediction, automotive, aerospace, life sciences, digital content creation, oil/gas exploration, and academic research.
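As context for readers who have not used PBS, a job is described by a short script of #PBS directives and handed to the scheduler with qsub. The sketch below composes and submits a minimal job; the resource requests, script name, and application command are illustrative examples, not tied to versions 7.1/8.0.

```python
# Compose and submit a minimal PBS job; resource requests are examples.
import pathlib
import subprocess

job = """#!/bin/sh
#PBS -N demo_job
#PBS -l nodes=2:ppn=4
#PBS -l walltime=01:00:00
#PBS -j oe
cd "$PBS_O_WORKDIR"
mpiexec ./my_app
"""

script = pathlib.Path("demo.pbs")
script.write_text(job)
job_id = subprocess.run(["qsub", str(script)], capture_output=True,
                        text=True, check=True).stdout.strip()
print("submitted as", job_id)
```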
Citations: 6
Best practices in cluster management
Pub Date: 2006-11-11 DOI: 10.1145/1188455.1188485
Rick Friedman
Linux clusters are receiving widespread attention in performance-driven environments like life sciences, manufacturing, oil and gas, and financial services. With their growing popularity, more organizations are looking to learn how they can outperform their competitors by implementing their own cluster solution. As such, the focus within the high performance computing market has shifted from whether an organization should implement a cluster management solution to how it should begin the process. If planned for and carried out effectively, clusters can help an organization increase its ROI, improve system performance, and minimize business risk. Without a proper strategy, an organization's cluster implementation process can result in slower time to resolution and increased costs - the very opposite of what clusters serve to achieve. This session will educate attendees on how to efficiently scale out Linux clusters, identify common implementation pitfalls, and discover what steps their organization should take when deploying Linux clusters.
Citations: 0