
Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.) - Latest Publications

CloudBridge: a Simple Cross-Cloud Python Library.
Nuwan Goonasekera, Andrew Lonie, James Taylor, Enis Afgan

With clouds becoming a standard target for deploying applications, it is more important than ever to be able to seamlessly utilise resources and services from multiple providers. Proprietary vendor APIs make this challenging and lead to conditional code being written to accommodate various API differences, requiring application authors to deal with these complexities and to test their applications against each supported cloud. In this paper, we describe an open source Python library called CloudBridge that provides a simple, uniform, and extensible API for multiple clouds. The library defines a standard 'contract' that all supported providers must implement, and an extensive suite of conformance tests to ensure that any exposed behavior is uniform across cloud providers, thus allowing applications to confidently utilise any of the supported clouds without any cloud-specific code or testing.
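Below is a minimal sketch of the kind of provider-agnostic call such a uniform API makes possible. The import path, provider identifiers, credential keys, and service accessors are assumptions based on CloudBridge's documented style and may differ between library versions; this is an illustration, not an excerpt from the paper.

```python
# Minimal cross-cloud sketch (assumed CloudBridge-style API; paths/keys may
# vary by version -- consult the CloudBridge documentation for specifics).
from cloudbridge.factory import CloudProviderFactory, ProviderList


def list_instances(provider_id, config):
    """List compute instances through the provider-agnostic interface."""
    provider = CloudProviderFactory().create_provider(provider_id, config)
    return list(provider.compute.instances.list())


# The same function body works against any supported cloud; only the
# credentials passed in the config dictionary change.
aws_instances = list_instances(
    ProviderList.AWS,
    {"aws_access_key": "...", "aws_secret_key": "..."})
os_instances = list_instances(
    ProviderList.OPENSTACK,
    {"os_username": "...", "os_password": "...", "os_auth_url": "..."})
```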

Citations: 0
On Automating XSEDE User Ticket Classification
Gwang Son, Victor Hazlewood, G. D. Peterson
The XSEDE ticket system, which is a help desk ticketing system, receives email and web-based problem reports (i.e., tickets) from users, and these tickets can be manually grouped into predefined categories either by the ticket submitter or by operations staff. This manual process can be automated using text classification algorithms such as Multinomial Naive Bayes (MNB) or a Softmax Regression Neural Network (SNN). Ticket subjects, rather than whole tickets, were used to build an input word list, along with a manually curated word group list, to enhance accuracy. The text mining algorithms used the input word list to select input words from the tickets. Compared with the Matlab svm() function, MNB and SNN showed overall better accuracy (up to ~85.8% when two categories are selected simultaneously). Also, the service provider resource (i.e., system name) information could be extracted from the tickets with ~90% accuracy.
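As an illustration of the classification approach, the sketch below trains a Multinomial Naive Bayes model on ticket subject lines, using scikit-learn as a stand-in for the authors' implementation. The example subjects, category labels, and the plain bag-of-words vectorizer are assumptions for demonstration only.

```python
# Subject-line ticket classification with Multinomial Naive Bayes (sketch).
# Training data below is invented; a real deployment would use labeled
# historical tickets and the curated word lists described in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

subjects = [
    "cannot log in to login node",
    "job stuck in queue on comet",
    "mpi job crashes with segfault",
    "password reset request",
]
categories = ["login", "jobs", "software", "accounts"]

# Bag-of-words features over the subject text, then an MNB classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(subjects, categories)

print(clf.predict(["unable to reset my password"]))  # expected: ['accounts']
```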
Citations: 11
Evaluating Distributed Platforms for Protein-Guided Scientific Workflow
Natasha Pavlovikj, Kevin Begcy, S. Behera, Malachy T. Campbell, H. Walia, J. Deogun
Complex and large-scale applications in different scientific disciplines are often represented as a set of independent tasks, known as workflows. Many scientific workflows have intensive resource requirements. Therefore, different distributed platforms, including campus clusters, grids, and clouds, are used for efficient execution of these workflows. In this paper we examine the performance and the cost of running the Pegasus Workflow Management System (Pegasus WMS) implementation of blast2cap3, the protein-guided assembly approach, on three different execution platforms: Sandhills, the University of Nebraska Campus Cluster; the academic grid Open Science Grid (OSG); and the commercial cloud Amazon EC2. Furthermore, the behavior of the blast2cap3 workflow was tested with different numbers of tasks. For each workflow and execution platform, we perform multiple runs in order to compare the total workflow running time, as well as the resource availability over time. Additionally, for the most interesting runs, the number of running versus idle jobs over time was analyzed for each platform. The experiments show that using the Pegasus WMS implementation of blast2cap3 with more than 100 tasks significantly reduces the running time on all execution platforms. In general, for our workflow, better performance and resource usage were achieved when Amazon EC2 was used as the execution platform. However, given the cost of Amazon EC2, the academic distributed systems can sometimes be a good alternative with excellent performance, especially when plenty of resources are available.
Citations: 0
Challenges in particle tracking in turbulence on a massive scale
D. Buaria, P. Yeung
An important but somewhat under-investigated issue in turbulence, and a challenge in high-performance computing, is the problem of interpolating, from a set of grid points, the velocity of many millions of fluid particles that wander in the flow field, which itself is divided into a large number of sub-domains according to a chosen domain decomposition scheme. We present below the main elements of the algorithmic strategies that have led to reasonably good performance on two major Petascale computers, namely Stampede and Blue Waters. Performance data are presented at up to 16384 CPU cores for 64 million fluid particles.
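The sketch below illustrates the interpolation step in its simplest form: trilinear interpolation of a gridded velocity field at a particle position within one local block. The scheme, grid layout, and function names are assumptions chosen for clarity; production DNS codes typically use higher-order schemes and must additionally handle particles that migrate across sub-domain boundaries.

```python
# Trilinear interpolation of particle velocity from a uniform grid (sketch).
import numpy as np


def trilinear_velocity(u, x, dx):
    """Interpolate velocity field u (shape nx, ny, nz, 3) at position x."""
    i, j, k = int(x[0] // dx), int(x[1] // dx), int(x[2] // dx)
    fx, fy, fz = (x / dx) - np.array([i, j, k])   # fractional offsets in cell
    v = np.zeros(3)
    # Weighted sum over the 8 corners of the enclosing grid cell.
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((fx if di else 1 - fx) *
                     (fy if dj else 1 - fy) *
                     (fz if dk else 1 - fz))
                v += w * u[i + di, j + dj, k + dk]
    return v


# Example: a small local block with grid spacing dx = 1.0
u = np.random.rand(5, 5, 5, 3)                    # velocities at grid nodes
print(trilinear_velocity(u, np.array([1.3, 2.7, 0.4]), 1.0))
```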
Citations: 0
Performance Study of a Minimalistic Simulator on XSEDE Massively Parallel Systems
Rong Rong, J. Hao, Jason Liu
Scalable Simulation Framework (SSF), a parallel simulation application programming interface (API) for large-scale discrete-event models, has been widely adopted in many areas. This paper presents a simplified and more streamlined implementation, called MiniSSF. MiniSSF maintains the core design concept of SSF while removing some of the complex but rarely used features for the sake of efficiency. It also introduces several new features that can greatly simplify model development and/or improve the simulator's performance. More specifically, an automated compiler-based source-code translation scheme has been adopted in MiniSSF to enable scalable process-oriented simulation using handcrafted threads. A hierarchical hybrid synchronization algorithm has been incorporated in the simulator to improve parallel performance. Also, a new set of platform-independent API functions has been added for developing simulation models that execute transparently on different parallel computing platforms. In this paper, we report performance results from experiments on different XSEDE platforms to assess the performance and scalability of MiniSSF. The results show that the simulator achieves superior performance and can adapt its synchronization to the model's computation and communication demands, as well as to the underlying parallel platform. They also suggest that more automatic adaptation and fine-grained performance tuning will be needed to handle more complex large-scale simulation scenarios.
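To make the discrete-event setting concrete, the sketch below shows the event-queue core that SSF-style simulators are built around. It is a sequential toy with invented class and method names; it does not reflect MiniSSF's actual API, its process-oriented model, or its parallel synchronization machinery.

```python
# Minimal discrete-event engine (sketch): a priority queue of timestamped
# events processed in time order. Class/method names are hypothetical.
import heapq


class Simulator:
    def __init__(self):
        self.now = 0.0
        self._queue = []      # entries: (timestamp, sequence, callback)
        self._seq = 0         # tie-breaker for events at equal times

    def schedule(self, delay, callback):
        heapq.heappush(self._queue, (self.now + delay, self._seq, callback))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self._queue)
            callback()


sim = Simulator()


def ping():
    print(f"ping at t={sim.now:.1f}")
    sim.schedule(2.0, ping)   # reschedule itself every 2 time units


sim.schedule(0.0, ping)
sim.run(until=6.0)            # prints pings at t = 0.0, 2.0, 4.0, 6.0
```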
Citations: 7
Dynamically Provisioning Portable Gateway Infrastructure Using Docker and Agave
R. Dooley, Joe Stubbs
The iPlant Agave Developer APIs are a Science-as-a-Service platform for developing modern science gateways. One trend we see emerging from our users is the aggregation of many different, distributed compute and storage systems. The rise in popularity of IaaS, PaaS, and container technologies has made the rapid deployment of elastic gateway infrastructure a reality. In this talk we will introduce Docker and the Agave Developer APIs, then demonstrate how to use them to provision applications and infrastructure that are portable across any Linux hosting environment. We will conclude by using our lightweight gateway technology, GatewayDNA, to run an application and move data across multiple systems simultaneously.
Citations: 7
Accelerating TauDEM as a Scalable Hydrological Terrain Analysis Service on XSEDE
Ye Fan, Yan Y. Liu, Shaowen Wang, D. Tarboton, Ahmet Artu Yildirim, Nancy Wilkins-Diehr
In this paper, we present our experience scaling a parallel hydrological analysis tool - TauDEM - to thousands of processors and large elevation datasets through an XSEDE ECSS effort and multi-institutional collaboration.
Citations: 11
Gateways to Discovery: Cyberinfrastructure for the Long Tail of Science
R. Moore, C. Baru, Diane A. Baxter, Geoffrey Fox, A. Majumdar, P. Papadopoulos, W. Pfeiffer, R. Sinkovits, Shawn M. Strande, M. Tatineni, R. Wagner, Nancy Wilkins-Diehr, M. Norman
NSF-funded computing centers have primarily focused on delivering high-performance computing resources to academic researchers with the most computationally demanding applications. But now that computational science is so pervasive, there is a need for infrastructure that can serve more researchers and disciplines than just those at the peak of the HPC pyramid. Here we describe SDSC's Comet system, which is scheduled for production in January 2015 and was designed to address the needs of a much larger and more expansive science community-- the "long tail of science". Comet will have a peak performance of 2 petaflop/s, mostly delivered using Intel's next generation Xeon processor. It will include some large-memory and GPU-accelerated nodes, node-local flash memory, 7 PB of Performance Storage, and 6 PB of Durable Storage. These features, together with the availability of high performance virtualization, will enable users to run complex, heterogeneous workloads on a single integrated resource.
Citations: 44
A leap forward with UTK's Cray XC30
M. Fahey
This paper shows a significant productivity leap for several science groups and the accomplishments they have made to date on Darter - a Cray XC30 at the University of Tennessee, Knoxville. The increased productivity is due to the faster processors and interconnect combined in a new generation of Cray systems, which nevertheless retains a programming environment very similar to that of previous generations of Cray machines, making porting easy.
Citations: 1
A Three-Semester, Interdisciplinary Approach to Parallel Programming in a Liberal Arts University Setting
Mike Morris, Karl Frinkle
We describe a successful addition of high performance computing (HPC) into a traditional computer science curriculum at a liberal arts university. The approach incorporated a three-semester sequence of courses emphasizing parallel programming techniques, with the final course focusing on a research-level mathematical project that was executed on a TOP500 supercomputer. A group of students with varied programming backgrounds participated in the program. Emphasis was placed on utilizing the Open MPI and CUDA libraries along with parallel algorithm and file I/O analysis.
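As a flavor of the message-passing exercises such a course sequence might begin with, the sketch below computes a distributed sum using the mpi4py bindings. Using Python here is an assumption for consistency with the other examples on this page; the courses described emphasize Open MPI and CUDA, typically from C.

```python
# Distributed partial-sum reduction with MPI (sketch).
# Run with an MPI launcher, e.g.:  mpirun -n 4 python partial_sum.py
from mpi4py import MPI  # requires an MPI implementation such as Open MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums a strided slice of the range, then results are reduced
# to rank 0 -- the basic divide-and-reduce pattern taught early on.
local = sum(range(rank, 1000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total = {total}")   # 499500 regardless of the number of ranks
```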
Citations: 3