"Adaptive latency-aware parallel resource mapping: task graph scheduling onto heterogeneous network topology"
L. Shih

Given a graph pair, an acyclic task data-flow graph (DFG) and a processor network topology graph with two-way communication channels, the latency-adaptive A* parallel resource mapper produces an efficient task execution schedule that can also be used to quantify the quality of a parallel software/hardware match. The network-latency-adaptive parallel mapping framework, from static task DFG to parallel processor network topology graph, aims to automatically optimize workflow task scheduling among computation cluster nodes or subnets, including CPUs, multicore processors, VLIW machines, and co-processor accelerators such as GPUs, DSPs, and FPGA fabric blocks. The mapper starts by assigning the highest-priority task to a centrally located, capable processor in the network topology, and then conservatively assigns additional nearby, capable processor cores only as needed, improving computation efficiency with the fewest sufficient processors scheduled. For slower communication with high inter/intra-processor latency ratios, the adaptive mapper automatically opts for fewer processor cores, or even a single sequential processor, over parallel processing. Examples run on a simulated adaptive mapper demonstrate that latency-adaptive parallel resource mapping achieves better cost-efficiency than fixed task-to-processor mapping: nearly optimal speedup using fewer, nearby processors, with one or zero processor/switch hops in around 90% of the data transfers. Conversely, for faster networks, more processors are scheduled automatically because of the lower inter-processor latency. In extreme cases, where offloading the next task to another processor may be faster than waiting for a processor to finish its current task (i.e., when the inter/intra-processor latency ratio is below 1), the latency-adaptive mapper appears to extrapolate well to how pipeline processing can outperform parallel processing, a surprising bonus of this parallel resource mapping study.
{"title":"Adaptive latency-aware parallel resource mapping: task graph scheduling onto heterogeneous network topology","authors":"L. Shih","doi":"10.1145/2484762.2484787","DOIUrl":"https://doi.org/10.1145/2484762.2484787","url":null,"abstract":"Given a graph pair, an acyclic task data-flow graph (DFG) and a processor network topology graph with 2-way communication channels, the latency-adaptive A* parallel resource mapping produces an efficient task execution schedule that can also be used to quantify the quality of a parallel software/hardware match. The network latency adaptive parallel mapping framework, from static task DFG, to parallel processor network topology graph, is aimed at automatically optimizing workflow task scheduling among computation cluster nodes or subnets, including CPU, multicore, VLIW and co-processor accelerators such as GPUs, DSPs, FPGA fabric blocks, etc. The latency-adaptive parallel mapper starts scheduling by assigning the highest priority task a centrally located, capable processor in the network topology, and then conservatively assigns additional nearby, capable network processor cores only as needed to improve computation efficiency with fewest, yet sufficient processors scheduled. For slower communication with high inter/intra-processor latency ratios, the adaptive parallel mapper automatically opts for fewer processor cores, or even schedules just a single sequential processor, over parallel processing. The examples tested on a simulated adaptive mapper, demonstrate that the latency-adaptive parallel resource mapping successfully achieves better cost-efficiency in comparison to fixed task-to-processor mapping, in nearly optimal speedup, using only fewer nearby processors, resulting in only 1 or no processor/switch hop in around 90% of the data transfers. Inversely for faster networks, more processors are scheduled automatically due to lower inter-processor latency. In extreme cases, where offloading next task to another processor may be faster than waiting for a processor to finish the current task (i.e., when inter/intra-processor latency ratio < 1), the latency adaptive mapper seems to extrapolate well on how pipeline processing can outperform parallel processing, offering a surprising bonus in this parallel resource mapping study.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114417567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A neuroscience gateway: software and implementation"
S. Sivagnanam, V. Astakhov, K. Yoshimoto, N. Carnevale, M. Martone, A. Majumdar, A. Bandrowski
In this paper, we describe the neuroscience gateway (NSG), which facilitates access to high performance computing resources for computational neuroscientists. Through a simple web-based portal, the NSG provides a streamlined environment for uploading models, specifying HPC job parameters, querying running job status, receiving job completion notices, and storing and retrieving output data. The NSG architecture transparently distributes user jobs to appropriate HPC resources available through the XSEDE organization.
{"title":"A neuroscience gateway: software and implementation","authors":"S. Sivagnanam, V. Astakhov, K. Yoshimoto, N. Carnevale, M. Martone, A. Majumdar, A. Bandrowski","doi":"10.1145/2484762.2484816","DOIUrl":"https://doi.org/10.1145/2484762.2484816","url":null,"abstract":"In this paper, we describe the neuroscience gateway (NSG), which facilitates access to high performance computing resources for computational neuroscientists. Through a simple web-based portal, the NSG provides a streamlined environment for uploading models, specifying HPC job parameters, querying running job status, receiving job completion notices, and storing and retrieving output data. The NSG architecture transparently distributes user jobs to appropriate HPC resources available through the XSEDE organization.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"50 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129750559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Attacking HIV, tuberculosis and histoplasmosis with XSEDE resources"
David Toth, Jimmy Franco, C. Berkes

HIV, tuberculosis and histoplasmosis are infectious diseases that affect millions of people worldwide. We describe our efforts to find cures for these diseases by using virtual screening on one of the XSEDE supercomputers to identify possible inhibitors of essential proteins in these organisms. We have completed the virtual screens and found promising compounds for each disease. Cell culture experiments support the likelihood that a number of the compounds will be effective for treating both histoplasmosis and tuberculosis.
{"title":"Attacking HIV, tuberculosis and histoplasmosis with XSEDE resources","authors":"David Toth, Jimmy Franco, C. Berkes","doi":"10.1145/2484762.2484766","DOIUrl":"https://doi.org/10.1145/2484762.2484766","url":null,"abstract":"HIV, tuberculosis and histoplasmosis are infectious diseases that affect millions of people world-wide. We describe our efforts to find cures for these diseases using the technique of virtual screening to identify possible inhibitors for essential proteins in these organisms using one of the XSEDE supercomputers. We have completed the virtual screens and have found promising compounds for each disease. Cell culture experiments have supported the likelihood of a number of the compounds being effective for treating both histoplasmosis and tuberculosis.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130085631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Opposites attract: computational and quantitative outreach through artistic expressions"
A. Szczepanski, Christal Yost, Norman Magden, Evan Meaney, Carolyn I. Staples
Staff from the University of Tennessee's Joint Institute for Computational Sciences, National Institute for Computational Sciences, and Remote Data Analysis and Visualization Center have teamed up with faculty from UT's School of Art to engage students, the public, and the research community in a number of projects that connect the arts with the science and computing disciplines. These collaborations have led to coursework for students, videos about scientific discovery, and the production of novel, computer-mediated artwork. Both the arts and the sciences have gained from these collaborations.
{"title":"Opposites attract: computational and quantitative outreach through artistic expressions","authors":"A. Szczepanski, Christal Yost, Norman Magden, Evan Meaney, Carolyn I. Staples","doi":"10.1145/2484762.2484772","DOIUrl":"https://doi.org/10.1145/2484762.2484772","url":null,"abstract":"Staff from the University of Tennessee's Joint institute for Computational Sciences, National Institute for Computational Sciences, and Remote Data Analysis and Visualization Center have teamed up with faculty from UT's School of Art to engage with students, the public, and the research community on a number of projects that connect the arts with the science and computing disciplines. These collaborations have led to coursework for students, videos about scientific discovery, and the production of novel, computer-mediated artwork. Both the arts and the sciences have gained from these collaborations.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126290308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Development of undergraduate programs in computational science: panel"
P. Molnár, David M. Toth, R. Vincent-Finley

There is a pressing need for a workforce with the simulation and modeling skills associated with computational science. A number of national studies have substantiated those needs with respect to the future competitiveness of the US in research and development, the innovation of new products, and the competitiveness of our industry [1,2,3].
{"title":"Development of undergraduate programs in computational science: panel","authors":"P. Molnár, David M. Toth, R. Vincent-Finley","doi":"10.1145/2484762.2484804","DOIUrl":"https://doi.org/10.1145/2484762.2484804","url":null,"abstract":"There is a pressing need for a workforce with the simulation and modeling skills associated with computational science. A number of national studies have substantiated those needs with respect to the future competitiveness of the US in research and development, the innovation of new products, and the competitiveness of our industry [1,2,3].","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"23 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122502094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"High performance computing workflow for protein functional annotation"
L. Stanberry, Yuan Liu, Bhanu Rekepalli, Paul Giblock, R. Higdon, William Broomall
Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the PSU (Protein Sequence Universe) expands exponentially. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this study, we built an automated workflow to enable large-scale protein annotation into existing orthologous groups using HPC (High Performance Computing) architectures. We developed a low-complexity classification algorithm to assign proteins into bacterial COGs (Clusters of Orthologous Groups of proteins). Based on PSI-BLAST (Position-Specific Iterative Basic Local Alignment Search Tool), the algorithm was validated on simulated and archaeal data to ensure at least 80% specificity and sensitivity. The workflow, with highly scalable parallel applications for classification and sequence alignment, was developed on XSEDE (Extreme Science and Engineering Discovery Environment) supercomputers. Using the workflow, we have classified one million newly sequenced bacterial proteins. With the rapid expansion of the PSU, the proposed workflow will enable scientists to annotate big genome data.
{"title":"High performance computing workflow for protein functional annotation","authors":"L. Stanberry, Yuan Liu, Bhanu Rekepalli, Paul Giblock, R. Higdon, William Broomall","doi":"10.1145/2484762.2484809","DOIUrl":"https://doi.org/10.1145/2484762.2484809","url":null,"abstract":"Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the PSU (Protein Sequence Universe) expands exponentially. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible whereas a high compute cost limits the utility of existing automated approaches. In this study, we built an automated workflow to enable large-scale protein annotation into existing orthologous groups using HPC (High Performance Computing) architectures. We developed a low complexity classification algorithm to assign proteins into bacterial COGs (Clusters of Orthologous Groups of proteins). Based on the PSI-BLAST (Position-Specific Iterative Basic Local Alignment Search Tool), the algorithm was validated on simulated and archaeal data to ensure at least 80% specificity and sensitivity. The workflow with highly scalable parallel applications for classification and sequence alignment was developed on XSEDE (Extreme Science and Engineering Discovery Environment) supercomputers. Using the workflow, we have classified one million newly sequenced bacterial proteins. With the rapid expansion of the PSU, the proposed workflow will enable scientists to annotate big genome data.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131484754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Enabling dark energy survey science analysis with simulations on XSEDE resources"
B. Erickson, Raminderjeet Singh, A. Evrard

Upcoming wide-area sky surveys offer the power to test the source of cosmic acceleration by placing extremely precise constraints on existing cosmological model parameters. These observational surveys will employ multiple tests based on statistical signatures of galaxies and larger-scale structures such as clusters of galaxies. Simulations of large-scale structure provide the means to maximize the power of sky survey tests by characterizing key sources of systematic uncertainties. We describe an XSEDE program to produce multiple synthetic sky surveys of galaxies and large-scale cosmic structure in support of science analysis for the Dark Energy Survey. We explain our Airavata-enabled methods and report extensions to our workflow processing over the last year. We highlight science analysis focused on counts of clusters of galaxies.
{"title":"Enabling dark energy survey science analysis with simulations on XSEDE resources","authors":"B. Erickson, Raminderjeet Singh, A. Evrard","doi":"10.1145/2484762.2484801","DOIUrl":"https://doi.org/10.1145/2484762.2484801","url":null,"abstract":"Upcoming wide-area sky surveys offer the power to test the source of cosmic acceleration by placing extremely precise constraints on existing cosmological model parameters. These observational surveys will employ multiple tests based on statistical signatures of galaxies and larger-scale structures such as clusters of galaxies. Simulations of large-scale structure provide the means to maximize the power of sky survey tests by characterizing key sources of systematic uncertainties. We describe an XSEDE program to produce multiple synthetic sky surveys of galaxies and large-scale cosmic structure in support of science analysis for the Dark Energy Survey. We explain our Airavata-enabled methods and report extensions to our workflow processing over the last year. We highlight science analysis focused on counts of clusters of galaxies.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127954830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Low entropy data mapping for sparse iterative linear solvers"
M. Esmaily-Moghadam, Y. Bazilevs, A. Marsden

An efficient parallel data structure implementation is presented that modifies the permutation of the residual vector to achieve an optimized memory layout of partitioned meshes for solving sparse linear systems. The proposed algorithm sorts the data on each processor according to a set of rules. This simplifies the implementation of parallel iterative solvers and allows non-blocking MPI communication to overlap with computation in matrix-vector product operations.
{"title":"Low entropy data mapping for sparse iterative linear solvers","authors":"M. Esmaily-Moghadam, Y. Bazilevs, A. Marsden","doi":"10.1145/2484762.2484797","DOIUrl":"https://doi.org/10.1145/2484762.2484797","url":null,"abstract":"An efficient parallel data structure implementation is presented to modify the permutation on the residual vector to achieve optimized memory layout of partitioned meshes for solving sparse linear systems. This novel algorithm is proposed to sort the data on each processor with respect to a set of rules. This simplifies implementation of parallel iterative solver algorithms and allows an overlap between non-blocking MPI communication and computations in matrix-vector product operations.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133907647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Multiscale characterization of macromolecular dynamics: application to photoactive yellow protein"
M. A. Rohrdanz, Wenwei Zheng, Bradley Lambeth, C. Clementi
Photoactive yellow protein (PYP) was first discovered in Halorhodospira halophila, where it causes the bacterium to flee potentially DNA-damaging light, and it serves as a model system for signaling proteins. Upon absorption of a blue photon, PYP's chromophore undergoes a trans-to-cis isomerization that disrupts the hydrogen bond network in the core of the protein, resulting in a large conformational change and transformation into the signaling state. Because of the timescales involved, conventional molecular dynamics simulation of this system is practically impossible. In addition, due to the short lifetime of the signaling state, experimental determination of its structure is also challenging. Here we use a combination of tools we have developed, a coarse-grain model [4], an all-atom reconstruction technique [5], locally scaled diffusion maps [9], and our most recent technique, diffusion-map-directed molecular dynamics [14], to explore the elusive structure of the signaling state of PYP.
{"title":"Multiscale characterization of macromolecular dynamics: application to photoacitve yellow protein","authors":"M. A. Rohrdanz, Wenwei Zheng, Bradley Lambeth, C. Clementi","doi":"10.1145/2484762.2484836","DOIUrl":"https://doi.org/10.1145/2484762.2484836","url":null,"abstract":"Photoactive yellow protein was first discovered in Halorhodospira halophilia, causing the bacterium to flee potentially DNA-damaging light, and serves as a model system for signaling proteins. Upon absorption of a blue photon, PYP's chromophore undergoes a trans-to-cis isomerization that disrupts the hydrogen bond network in the core of the protein, resulting in a large conformational change and transformation into the signaling state. Because of the timescales involved, conventional molecular dynamics simulation of this system is practically impossible. In addition, due to the short signaling state lifetime, experimental determination of the signaling-state structure is also challenging. Here we use a combination of tools we have developed: a coarse-grain model [4], an all-atom reconstruction technique [5], locally scaled diffusion maps [9], and our most recent technique diffusion map-directed molecular dynamics [14], to explore the elusive structure of the signaling state of PYP.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131917358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"National Center for Genome Analysis Support leverages XSEDE to support life science research"
Richard D. LeDuc, T. Doak, Le-Shin Wu, Philip D. Blood, C. Ganote, M. Vaughn
The National Center for Genome Analysis Support (NCGAS) is a response to the concern that NSF-funded life scientists were underutilizing the national cyberinfrastructure because little effort had been made to tailor these resources to the life-science community's needs. NCGAS is a multi-institutional service center that provides computational resources, specialized systems support for both end users and systems administrators, curated sets of applications, and, most importantly, scientific consultations for domain scientists unfamiliar with next-generation DNA sequence data analysis. NCGAS is a partnership between the Indiana University Pervasive Technology Institute, the Texas Advanced Computing Center, the San Diego Supercomputer Center, and the Pittsburgh Supercomputing Center. NCGAS provides hardened bioinformatics applications and user support on all aspects of a user's data analysis, including data management, systems usage, bioinformatics, and biostatistics issues.
{"title":"National Center for Genome Analysis support leverages XSEDE to support life science research","authors":"Richard D. LeDuc, T. Doak, Le-Shin Wu, Philip D. Blood, C. Ganote, M. Vaughn","doi":"10.1145/2484762.2484790","DOIUrl":"https://doi.org/10.1145/2484762.2484790","url":null,"abstract":"The National Center for Genome Analysis Support (NCGAS) is a response to the concern that NSF-funded life scientists were underutilizing the national cyberinfrastructure, because there has been little effort to tailor these resources to the life scientist communities needs. NCGAS is a multi-institutional service center that provides computational resources, specialized systems support to both the end-user and systems administrators, curated sets of applications, and most importantly scientific consultations for domain scientists unfamiliar with next generation DNA sequence data analysis. NCGAS is a partnership between Indiana University Pervasive Technology Institute, Texas Advanced Computing Center, San Diego Supercomputing Center, and the Pittsburgh Supercomputing Center. NCGAS provides hardened bioinformatic applications and user support on all aspects of a user's data analysis, including data management, systems usage, bioinformatics, and biostatistics related issues.","PeriodicalId":426819,"journal":{"name":"Proceedings of the Conference on Extreme Science and Engineering Discovery Environment: Gateway to Discovery","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133356799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}