Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.): Latest Publications
Methods For Creating XSEDE Compatible Clusters
Jeremy Fischer, R. Knepper, M. Standish, C. Stewart, Resa Alvord, D. Lifka, B. Hallock, Victor Hazlewood
The Extreme Science and Engineering Discovery Environment (XSEDE) has created a suite of software collectively known as the basic XSEDE-compatible cluster build. It has been distributed as a Rocks roll for some time and is now available as individual RPM packages, so that components can be downloaded and installed selectively, as appropriate, on existing, working clusters. In this paper, we explain the concept of the XSEDE-compatible cluster and show how to install individual components as RPMs using Puppet and the XSEDE-compatible cluster YUM repository.
{"title":"Methods For Creating XSEDE Compatible Clusters","authors":"Jeremy Fischer, R. Knepper, M. Standish, C. Stewart, Resa Alvord, D. Lifka, B. Hallock, Victor Hazlewood","doi":"10.1145/2616498.2616578","DOIUrl":"https://doi.org/10.1145/2616498.2616578","url":null,"abstract":"The Extreme Science and Engineering Discovery Environment has created a suite of software that is collectively known as the basic XSEDE-compatible cluster build. It has been distributed as a Rocks roll for some time. It is now available as individual RPM packages, so that it can be downloaded and installed in portions as appropriate on existing and working clusters. In this paper, we explain the concept of the XSEDE-compatible cluster and explain how to install individual components as RPMs through use of Puppet and the XSEDE compatible cluster YUM repository.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"55 1","pages":"74:1-74:5"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77777223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accelerating Ab Initio Nucleon Structure Calculations with All-Mode-Averaging on Gordon
M. Lin, Y. Aoki, T. Blum, T. Izubuchi, C. Jung, S. Ohta, E. Shintani, T. Yamazaki, S. Sasaki
Nucleons have long been known to be composed of sub-atomic particles called quarks and gluons, which interact through the strong force and are described theoretically by Quantum Chromodynamics (QCD). Lattice QCD (LQCD), in which continuous space-time is replaced by grid points on a four-dimensional lattice and ab initio Monte Carlo simulations are performed, is by far the only model-independent method for studying QCD with controllable errors. We report the successful application of a novel algorithm, All-Mode-Averaging (AMA), to LQCD calculations of nucleon internal structure on the Gordon supercomputer, carried out under our XSEDE award of roughly 6 million service units. The application of AMA resulted in as much as a factor-of-30 speedup in computational efficiency.
{"title":"Accelerating Ab Initio Nucleon Structure Calculations with All-Mode-Averaging on Gordon","authors":"M. Lin, Y. Aoki, T. Blum, T. Izubuchi, C. Jung, S. Ohta, E. Shintani, T. Yamazaki, S. Sasaki","doi":"10.1145/2616498.2616516","DOIUrl":"https://doi.org/10.1145/2616498.2616516","url":null,"abstract":"The composition of nucleons has long been known to be sub-atomic particles called quarks and gluons, which interact through the strong force and theoretically can be described by Quantum Chromodynamics (QCD). Lattice QCD (LQCD), in which the continuous space-time is translated into grid points on a four-dimensional lattice and ab initio Monte Carlo simulations are performed, is by far the only model-independent method to study QCD with controllable errors. We report the successful application of a novel algorithm, All-Mode-Averaging, in the LQCD calculations of nucleon internal structure on the Gordon supercomputer our award of roughly 6 million service units through XSEDE. The application of AMA resulted in as much as a factor of 30 speedup in computational efficiency.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"46 1","pages":"3:1-3:2"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81448307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Introductory Course on Modeling and Simulation
David M. Toth, J. Solka
We describe our experiences teaching CPSC 109 - Introduction to Modeling and Simulation, an introductory course that we developed [1]. The course fills one of two quantitative reasoning requirements for the general education program at the University of Mary Washington (UMW) and serves as one of two possible prerequisites for UMW's Computer Science 1 course [2]. It is also intended to serve as a bridge between computer science and other disciplines at UMW, particularly those in the natural and physical sciences. The course is based on the National Computational Science Institute (NCSI) [3] Introduction to Computational Thinking Workshop, but adds an explicit emphasis on introductory programming concepts for several weeks to ensure adequate preparation for Computer Science 1. We discuss the tools the students use in the course, some of the assignments, and the projects that students have created at the end of the semester. In addition, we discuss a special version of the course we have created for students in UMW's honors program [4].
{"title":"An Introductory Course on Modeling and Simulation","authors":"David M. Toth, J. Solka","doi":"10.1145/2616498.2616572","DOIUrl":"https://doi.org/10.1145/2616498.2616572","url":null,"abstract":"We describe our experiences teaching CPSC 109 - Introduction to Modeling and Simulation, an introductory course that we developed [1]. The course fills one of two quantitative reasoning requirements for the general education program at the University of Mary Washington (UMW) and serves as one of two possible prerequisites for UMW's Computer Science 1 course [2]. It is also intended to serve as a bridge between computer science and other disciplines at UMW, particularly those in the natural and physical sciences. The course is based on the National Computational Science Institute (NCSI) [3] Introduction to Computational Thinking Workshop, but adds in an explicit emphasis on introductory programming concepts for several weeks to ensure adequate preparation for Computer Science 1. We discuss the tools the students use in the course, some assignments, and the projects that students have created at the end of the semester. In addition, we discuss a special version of the course we have created for students in UMW's honors program [4].","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"215 1","pages":"67:1"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89188694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Slices: Provisioning Heterogeneous HPC Systems
A. Merritt, N. Farooqui, M. Slawinska, Ada Gavrilovska, K. Schwan, Vishakha Gupta
High-end computing systems are becoming increasingly heterogeneous, with nodes composed of multiple CPUs and accelerators such as GPGPUs, and with potential additional heterogeneity in memory configurations and network connectivity. Further, as we move to exascale systems, their anticipated use is one in which simulations co-run with online analytics or visualization methods, or in which a high-fidelity simulation co-runs with lower-order methods and/or with programs performing uncertainty quantification. To explore and understand the challenges that arise when multiple applications are mapped to heterogeneous machine resources, our research has developed methods that make it easy to construct 'virtual hardware platforms': sets of CPUs and GPGPUs custom-configured for applications when and as required. Specifically, the 'slicing' runtime presented in this paper manages a set of resources for each application, and at any one time multiple such slices operate on shared underlying hardware. This paper describes the slicing abstraction and its ability to configure cluster hardware resources. It experiments with application scale-out, focusing on computationally intensive GPGPU-based computations, and it evaluates cluster-level resource sharing across multiple slices on the Keeneland machine, an XSEDE resource.
{"title":"Slices: Provisioning Heterogeneous HPC Systems","authors":"A. Merritt, N. Farooqui, M. Slawinska, Ada Gavrilovska, K. Schwan, Vishakha Gupta","doi":"10.1145/2616498.2616531","DOIUrl":"https://doi.org/10.1145/2616498.2616531","url":null,"abstract":"High-end computing systems are becoming increasingly heterogeneous, with nodes comprised of multiple CPUs and accelerators, like GPGPUs, and with potential additional heterogeneity in memory configurations and network connectivities. Further, as we move to exascale systems, the view of their future use is one in which simulations co-run with online analytics or visualization methods, or where a high fidelity simulation may co-run with lower order methods and/or with programs performing uncertainty quantification. To explore and understand the challenges when multiple applications are mapped to heterogeneous machine resources, our research has developed methods that make it easy to construct 'virtual hardware platforms' comprised of sets of CPUs and GPGPUs custom-configured for applications when and as required. Specifically, the 'slicing' runtime presented in this paper manages for each application a set of resources, and at any one time, multiple such slices operate on shared underlying hardware. This paper describes the slicing abstraction and its ability to configure cluster hardware resources. It experiments with application scale-out, focusing on their computationally intensive GPGPU-based computations, and it evaluates cluster-level resource sharing across multiple slices on the Keeneland machine, an XSEDE resource.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"14 1 1","pages":"46:1-46:8"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89959303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MS-FLUKSS and Its Application to Modeling Flows of Partially Ionized Plasma in the Heliosphere
N. Pogorelov, S. Borovikov, J. Heerikhuisen, Tae K. Kim, I. Kryukov, G. Zank
Flows of partially ionized plasma are frequently characterized by the presence of both thermal and nonthermal populations of ions. This occurs, e.g., in the outer heliosphere, the part of interstellar space beyond the solar system whose properties are determined by the solar wind (SW) interaction with the local interstellar medium (LISM). Understanding the behavior of such flows requires us to investigate a variety of physical phenomena occurring throughout the solar system. These include charge exchange between neutral and charged particles, the birth of pick-up ions (PUIs), the origin of energetic neutral atoms (ENAs), SW turbulence, etc. Collisions between atoms and ions in the heliospheric plasma are so rare that they must be modeled kinetically. PUIs born when LISM neutral atoms charge-exchange with SW ions represent a hot, non-equilibrium component and also require a kinetic treatment. The behavior of PUIs at the SW termination shock (TS) is of major importance for the interpretation of the puzzling data from the Voyager 1 and 2 spacecraft, which are now the only spacecraft making in situ measurements at the boundary of the solar system. We have recently proposed an explanation of the sky-spanning "ribbon" of unexpectedly intense ENA emissions detected by the Interstellar Boundary Explorer (IBEX) mission. Numerical solution of these problems, with realistic boundary conditions provided by remote and in situ observations of SW properties, requires adaptive mesh refinement (AMR) technologies and petascale supercomputers. Supported by the NSF ITR program and various NASA projects, we have implemented these in our Multi-Scale FLUid-Kinetic Simulation Suite (MS-FLUKSS), a collection of problem-oriented routines incorporated into the Chombo AMR framework. For the next 5-10 years, heliophysics research faces an extraordinary opportunity that cannot soon be repeated: to make in situ measurements of the SW from the Sun to the heliospheric boundaries and, at the same time, to extract information about the global behavior of the evolving heliosphere through ENA observations by IBEX. In this paper, we describe the application of new capabilities provided within our Extreme Science and Engineering Discovery Environment (XSEDE) project to model challenging space physics and astrophysics problems. We used XSEDE supercomputers to analyze flows of magnetized, rarefied, partially ionized plasma in which neutral atoms experience resonant charge exchange and collisions with ions. We modeled SW flows in the inner and outer heliosphere and compared our results with in situ measurements performed by the ACE, IBEX, and Voyager spacecraft.
{"title":"MS-FLUKSS and Its Application to Modeling Flows of Partially Ionized Plasma in the Heliosphere","authors":"N. Pogorelov, S. Borovikov, J. Heerikhuisen, Tae K. Kim, I. Kryukov, G. Zank","doi":"10.1145/2616498.2616499","DOIUrl":"https://doi.org/10.1145/2616498.2616499","url":null,"abstract":"Flows of partially ionized plasma are frequently characterized by the presence of both thermal and nonthermal populations of ions. This occurs, e. g., in the outer heliosphere -- the part of interstellar space beyond the solar system whose properties are determined by the solar wind (SW) interaction with the local interstellar medium (LISM). Understanding the behavior of such flows requires us to investigate a variety of physical phenomena occurring throughout the solar system. These include charge exchange processes between neutral and charged particles, the birth of pick-up ions (PUIs), the origin of energetic neutral atoms (ENAs), SW turbulence, etc. Collisions between atoms and ions in the heliospheric plasma are so rare that they should be modeled kinetically. PUIs born when LISM neutral atoms charge-exchange with SW ions represent a hot, non-equilibrium component and also require a kinetic treatment. The behavior of PUIs at the SW termination shock (TS) is of major importance for the interpretation of the puzzling data from the Voyager 1 and 2 spacecraft, which are now the only in situ space mission intended to investigate the boundary of the solar system. We have recently proposed an explanation of the sky-spanning \"ribbon\" of unexpectedly intense emissions of ENAs detected by the Interstellar Boundary Explorer (IBEX) mission. Numerical solution of these problems with the realistic boundary conditions provided by remote and in situ observations of the SW properties, requires the application of adaptive mesh refinement (AMR) technologies and petascale supercomputers. Supported by the NSF ITR program and various NASA projects, we have implemented these in our Multi-Scale FLUid-Kinetic Simulation Suite, which is a collection of problem-oriented routines incorporated into the Chombo AMR framework. For the next 5--10 years, heliophysics research is faced with an extraordinary opportunity that cannot be soon repeated. This is to make in situ measurements of the SW from the Sun to the heliospheric boundaries and, at the same time, extract information about the global behavior of the evolving heliosphere through ENA observations by IBEX. In this paper, we describe the application of new possibilities provided within our Extreme Science and Engineering Discovery Environment (XSEDE) project to model challenging space physics and astrophysics problems. We used XSEDE supercomputers to analyze flows of magnetized, rarefied, partially-ionized plasma, where neutral atoms experience resonant charge exchange and collisions with ions. We modeled the SW flows in the inner and outer heliosphere and compared our results with in situ measurements performed by the ACE, IBEX, and Voyager spacecraft.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. 
Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"30 1","pages":"22:1-22:8"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88059429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benefits of Cross Memory Attach for MPI libraries on HPC Clusters
Jérôme Vienne
With the number of cores per node increasing in modern clusters, an efficient implementation of intra-node communication is critical for application performance. MPI libraries generally use shared-memory mechanisms for communication inside the node; unfortunately, this approach has limitations for large messages. Linux kernel 3.2 introduced Cross Memory Attach (CMA), a mechanism to improve communication between MPI processes on the same node. However, because this feature is not enabled by default in the MPI libraries that support it, HPC administrators may leave it disabled, depriving users of its performance benefits. In this paper, we explain how to use CMA and present an evaluation of CMA using micro-benchmarks and the NAS Parallel Benchmarks (NPB), a suite of applications commonly used to evaluate parallel systems. Our performance evaluation reveals that CMA outperforms shared memory for large messages. Micro-benchmark evaluations show that CMA can improve performance by as much as a factor of four. With NPB, we see up to 24.75% improvement in total execution time for FT and up to 24.08% for IS.
{"title":"Benefits of Cross Memory Attach for MPI libraries on HPC Clusters","authors":"Jérôme Vienne","doi":"10.1145/2616498.2616532","DOIUrl":"https://doi.org/10.1145/2616498.2616532","url":null,"abstract":"With the number of cores per node increasing in modern clusters, an efficient implementation of intra-node communications is critical for application performance. MPI libraries generally use shared memory mechanisms for communication inside the node, unfortunately this approach has some limitations for large messages. The release of Linux kernel 3.2 introduced Cross Memory Attach (CMA) which is a mechanism to improve the communication between MPI processes inside the same node. But, as this feature is not enabled by default inside MPI libraries supporting it, it could be left disabled by HPC administrators which leads to a loss of performance benefits to users. In this paper, we explain how to use CMA and present an evaluation of CMA using micro-benchmarks and NAS parallel benchmarks (NPB) which are a set of applications commonly used to evaluate parallel systems.\u0000 Our performance evaluation reveals that CMA outperforms shared memory performance for large messages. Micro-benchmark level evaluations show that CMA can enhance the performance by as much as a factor of four. With NPB, we see up to 24.75% improvement in total execution time for FT and up to 24.08% for IS.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"41 1","pages":"33:1-33:6"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86481502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Photoionization of Ne⁸⁺
M. Pindzola, S. Abdel-Naby, C. Ballance
The time-dependent close-coupling method based on the Dirac equation is used to calculate single and double photoionization cross sections for Ne⁸⁺ in support of planned FLASH/DESY measurements. The fully correlated ground-state radial wavefunction is obtained by solving a time-independent inhomogeneous set of close-coupled equations. The repulsive interaction between electrons includes both Coulomb and Gaunt interactions. A Bessel-function expression is used to include both dipole and quadrupole effects in the radiation-field interaction. Propagation of the time-dependent close-coupled equations yields single and double photoionization cross sections for Ne⁸⁺ in reasonably good agreement with distorted-wave and R-matrix results.
{"title":"Photoionization of Ne8+","authors":"M. Pindzola, S. Abdel-Naby, C. Ballance","doi":"10.1145/2616498.2616500","DOIUrl":"https://doi.org/10.1145/2616498.2616500","url":null,"abstract":"The time-dependent close-coupling method based on the Dirac equation is used to calculate single and double photoionization cross sections for Ne8+ in support of planned FLASH/DESY measurements. The fully correlated ground state radial wavefunction is obtained by solving a time independent inhomogeneous set of close-coupled equations. The repulsive interaction between electrons includes both Coulomb and Gaunt interactions. A Bessel function expression is used to include both dipole and quadruple effects on the radiation field interaction. Propagation of the time-dependent close-coupled equations yields single and double photoionization cross sections for Ne8+ in reasonably good agreement with distorted-wave and R-matrix results.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"185 1","pages":"23:1-23:2"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78048991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-scale Hydrodynamic Brownian Simulations on Multicore and GPU Architectures
M. G. Lopez, Mitchel D. Horton, Edmond Chow
We present ongoing work on an implementation of Brownian dynamics simulation using a matrix-free method on hardware accelerators. This work describes the GPU acceleration of the smooth particle-mesh Ewald (SPME) algorithm that forms the main part of the computation and was previously ported to run on the Intel Xeon Phi.
{"title":"Large-scale Hydrodynamic Brownian Simulations on Multicore and GPU Architectures","authors":"M. G. Lopez, Mitchel D. Horton, Edmond Chow","doi":"10.1145/2616498.2616523","DOIUrl":"https://doi.org/10.1145/2616498.2616523","url":null,"abstract":"We present here ongoing work to produce an implementation of Brownian dynamics simulation using a matrix-free method with hardware accelerators. This work describes the GPU acceleration of a smooth particle-mesh Ewald (SPME) algorithm which is used for the main part of the computation, previously ported to run on Intel Xeon Phi.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"100 1","pages":"9:1-9:2"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74723871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Oklahoma PetaStore: A Business Model for Big Data on a Small Budget
Patrick Calhoun, David Akin, Joshua Alexander, Brett Zimmerman, Fred Keller, Brandon George, Henry Neeman
In the era of Big Data, research productivity can be highly sensitive to the availability of large-scale, long-term archival storage. Unfortunately, many mass storage systems are prohibitively expensive at scales appropriate for individual institutions rather than national centers. Furthermore, a key issue is the set of circumstances under which researchers can, and are willing to, adopt a centralized technology that, in a pure cost-recovery model, might be, or might appear to be, more expensive than what the research teams could build on their own. This paper examines a business model that addresses these concerns in a comprehensive manner, distributing the costs among a funding agency, the institution, and the research teams, thereby reducing the challenges faced by each.
{"title":"The Oklahoma PetaStore: A Business Model for Big Data on a Small Budget","authors":"Patrick Calhoun, David Akin, Joshua Alexander, Brett Zimmerman, Fred Keller, Brandon George, Henry Neeman","doi":"10.1145/2616498.2616548","DOIUrl":"https://doi.org/10.1145/2616498.2616548","url":null,"abstract":"In the era of Big Data, research productivity can be highly sensitive to the availability of large scale, long term archival storage. Unfortunately, many mass storage systems are prohibitively expensive at scales appropriate for individual institutions rather than for national centers. Furthermore, a key issue is the set of circumstances under which researchers can, and are willing to, adopt a centralized technology that, in a pure cost recovery model, might be, or might appear to be, more expensive than what the research teams could build on their own. This paper examines a business model that addresses these concerns in a comprehensive manner, distributing the costs among a funding agency, the institution and the research teams, thereby reducing the challenges faced by each.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"6 1","pages":"48:1-48:8"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80551384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architecting an autograder for parallel code
R. Carbunescu, Aditya Devarakonda, J. Demmel, S. Gordon, Jay Alameda, Susan Mehringer
As parallel computing grows and becomes an essential part of computer science, tools must be developed to help grade assignments in large courses, especially given the increasing prevalence of Massive Open Online Courses (MOOCs) in recent years. This paper describes some of the general challenges of building an autograder for parallel code and offers general suggestions and sample design decisions for the assignments presented. The paper explores the results of and experiences from using these autograders to enable the XSEDE 2013 and 2014 Parallel Computing Courses using resources from SDSC-Trestles, TACC-Stampede, and PSC-Blacklight.
{"title":"Architecting an autograder for parallel code","authors":"R. Carbunescu, Aditya Devarakonda, J. Demmel, S. Gordon, Jay Alameda, Susan Mehringer","doi":"10.1145/2616498.2616571","DOIUrl":"https://doi.org/10.1145/2616498.2616571","url":null,"abstract":"As parallel computing grows and becomes an essential part of computer science, tools must be developed to help grade assignments for large courses, especially with the prevalence of Massive Open Online Courses (MOOCs) increasing in recent years. This paper describes some of the general challenges related to building an autograder for parallel code with general suggestions and sample design decisions covering presented assignments. The paper explores the results and experiences from using these autograders to enable the XSEDE 2013 and 2014 Parallel Computing Course using resources from SDSC-Trestles, TACC-Stampede and PSC-Blacklight.","PeriodicalId":93364,"journal":{"name":"Proceedings of XSEDE16 : Diversity, Big Data, and Science at Scale : July 17-21, 2016, Intercontinental Miami Hotel, Miami, Florida, USA. Conference on Extreme Science and Engineering Discovery Environment (5th : 2016 : Miami, Fla.)","volume":"2014 1","pages":"68:1-68:8"},"PeriodicalIF":0.0,"publicationDate":"2014-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87748978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}