L. Pearlman, C. Kesselman, S. Gullapalli, B. Spencer, J. Futrelle, K. Ricker, Ian T. Foster, P. Hubbard, C. Severance
Earthquake engineers have traditionally investigated the behavior of structures with either computational simulations or physical experiments. Recently, a new hybrid approach has been proposed that allows tests to be decomposed into independent substructures that can be located at different test facilities, tested separately, and integrated via a computational simulation. We describe a grid-based architecture for performing such novel distributed hybrid computational/physical experiments. We discuss the requirements that underlie this extremely challenging application of grid technologies, describe our architecture and implementation, and discuss our experiences with the application of this architecture within an unprecedented earthquake engineering test that coupled large-scale physical experiments in Illinois and Colorado with a computational simulation. Our results point to the remarkable impacts that grid technologies can have on the practice of engineering, and also contribute to our understanding of how to build and deploy effective grid applications.
"Distributed hybrid earthquake engineering experiments: experiences with a ground-shaking grid application." In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.11
The SOAP protocol has emerged as a Web Service communication standard, providing simplicity, robustness, and extensibility. SOAP's relatively poor performance threatens to limit its usefulness, especially for high-performance scientific applications. The serialization of outgoing messages, which includes conversion of in-memory data types to XML-based string format and the packing of this data into message buffers, is a primary SOAP performance bottleneck. We describe the design and implementation of differential serialization, a SOAP optimization technique that can help bypass the serialization step for messages similar to those previously sent by a SOAP client or previously returned by a SOAP-based Web Service. The approach requires no changes to the SOAP protocol. Our implementation and performance study demonstrate the technique's potential, showing a substantial performance improvement over widely used SOAP toolkits that do not employ the optimization. We identify several factors that determine the usefulness and applicability of differential serialization, present a set of techniques for increasing the situations in which it can be used, and explore the design space of the approach.
"Differential serialization for optimized SOAP performance," N. Abu-Ghazaleh, M. Lewis, M. Govindaraju. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.8
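To make the idea above concrete: differential serialization caches the bytes of the last message sent and, when the next message differs only in a few field values, patches those values in place instead of re-serializing from scratch. The sketch below is an illustrative reconstruction under simplifying assumptions (a fixed field order and fixed-width padded values so byte offsets stay valid), not the authors' implementation or real SOAP:

```python
# Illustrative sketch of differential serialization (not the paper's code):
# serialize a message once, remember where each value sits in the buffer,
# and on later sends patch only the values that changed.

class DiffSerializer:
    def __init__(self, fields):
        self.fields = list(fields)   # ordered field names (fixed schema)
        self.buffer = None           # cached serialized message
        self.spans = {}              # field -> (start, end) byte offsets
        self.last = {}               # last value sent per field

    def _full_serialize(self, values, width):
        # Pad each value to a fixed width so in-place patching keeps
        # every offset valid (assumes values fit within `width` chars).
        parts, pos = ["<msg>"], len("<msg>")
        for f in self.fields:
            open_tag, close_tag = f"<{f}>", f"</{f}>"
            start = pos + len(open_tag)
            self.spans[f] = (start, start + width)
            parts.append(open_tag + str(values[f]).ljust(width) + close_tag)
            pos = start + width + len(close_tag)
        parts.append("</msg>")
        self.buffer = "".join(parts)
        self.last = dict(values)
        return self.buffer

    def serialize(self, values, width=12):
        if self.buffer is None:
            return self._full_serialize(values, width)   # first send: full cost
        # Differential path: patch only changed fields in the cached buffer.
        buf = self.buffer
        for f in self.fields:
            if values[f] != self.last[f]:
                s, e = self.spans[f]
                buf = buf[:s] + str(values[f]).ljust(e - s) + buf[e:]
        self.buffer, self.last = buf, dict(values)
        return buf
```

The first call pays full serialization cost; subsequent calls with mostly-unchanged fields touch only the changed byte spans, which is where the speedup over full re-serialization comes from.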
A self-configuring service can automatically leverage distributed service components and resources to compose an optimal configuration according to both the requirements of a particular user and the system characteristics. One major challenge for building such services is how to bring in service-specific knowledge, e.g., which components are needed and which optimization criteria to use, while still allowing reuse of common service composition functionalities. We present an architecture in which service developers express their service-specific knowledge in the form of a service recipe that is used by a generic synthesizer to perform service composition automatically. We apply our approach to three different services to illustrate the flexibility and simplicity of the recipe representation. We use simulations based on Internet measurements to evaluate how an appropriate optimization algorithm can be selected according to a developer's service-specific trade-off between optimality and cost of optimization.
"Building self-configuring services using service-specific knowledge," An-Cheng Huang, P. Steenkiste. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.6
Active Harmony provides a way to automate performance tuning. We apply the Active Harmony system to improve the performance of a cluster-based web service system, where the improvement cannot easily be achieved by tuning individual components. The experimental results show that no single configuration of the system performs well for all kinds of workloads. By tuning the parameters, Active Harmony helps the system adapt to different workloads, improving performance by up to 16%. For scalability, we demonstrate how to reduce the tuning time for a large system with many tunable parameters. Finally, we propose an algorithm to automatically adjust the structure of cluster-based web systems; this technique improves system throughput by up to 70%.
"Automated cluster-based Web service performance tuning," I. Chung, J. Hollingsworth. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.4
Our work is motivated by the large number of data stream sources in mesoscale meteorology, where asynchronous streams are commonplace. Techniques for filtering, aggregation, and transformation over multiple streams must remain effective when the streams are asynchronous. Our Rate Sizing algorithm (RS-Algo) links the number of events waiting to participate in a join to the rate of the streams responsible for their delivery. In this poster, we show the results of a performance evaluation of RS-Algo. The gains in memory utilization are largest under asynchronous streams.
"Performance evaluation of rate-based join window sizing for asynchronous data streams," N. Vijayakumar, Beth Plale. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.28
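The core idea, sizing a join window from observed stream rates rather than using a fixed size, can be illustrated with a toy formula: buffer on one stream only about as many events as arrive during a single inter-arrival gap of its join partner. This is a hedged sketch of the general rate-based approach; the actual RS-Algo formula is defined in the paper, not here:

```python
import math

# Toy rate-based window sizing: hold on stream A roughly the number of
# A-events that arrive between two consecutive events of its join partner B.

def window_size(rate_a, rate_b, min_size=1):
    """Events of A to buffer while waiting for the next B event.

    rate_a, rate_b: observed arrival rates (events/second) of the two streams.
    """
    if rate_b <= 0:
        raise ValueError("partner stream rate must be positive")
    return max(min_size, math.ceil(rate_a / rate_b))
```

With a fast 50 events/s stream joined against a slow 5 events/s stream, about 10 fast events arrive per slow event, so the window is 10; a synchronous pair with equal rates needs a window of only 1. That gap between rate-matched and rate-mismatched pairs is why the memory savings are largest under asynchronous streams.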
We investigate the question of scheduling tasks according to a user-centric value metric, called yield or utility. User value is an attractive basis for allocating shared computing resources, and is fundamental to economic approaches to resource management in linked clusters or grids. Even so, commonly used batch schedulers do not yet support value-based scheduling, and there has been little study of its use in a market-based grid setting. In part this is because scheduling to maximize time-varying value is a difficult problem where even simple formulations are intractable. We present improved heuristics for value-based task scheduling using a simple but rich formulation of value, in which a task's yield decays linearly with its waiting time. We also show the role of value-based scheduling heuristics in a framework for market-based bidding and admission control, in which clients negotiate for task services from multiple grid sites. Our approach follows an investment metaphor: the heuristics balance the risk of future costs against the potential for gains in accepting and scheduling tasks. In particular, we show the importance of opportunity cost, and the impact of risk due to uncertainty in the future job mix.
"Balancing risk and reward in a market-based task service," David E. Irwin, Laura E. Grit, J. Chase. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.5
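The value model named in the abstract, a yield that decays linearly with waiting time, can be made concrete with a toy single-resource scheduler. The steepest-decay-first rule below is one simple heuristic for that model, chosen for illustration; it is not the paper's improved algorithm, and the task fields are hypothetical:

```python
from dataclasses import dataclass

# Linear-decay value model: a task started after waiting `t` yields
# value - decay * t (floored at zero, since a late task earns nothing).

@dataclass
class Task:
    name: str
    value: float     # yield if started immediately
    decay: float     # value lost per unit of waiting time
    runtime: float   # service time on the single shared resource

def schedule_greedy(tasks):
    """Serve tasks in order of steepest decay; return (order, total_yield)."""
    clock, total, order = 0.0, 0.0, []
    for t in sorted(tasks, key=lambda t: t.decay, reverse=True):
        total += max(0.0, t.value - t.decay * clock)
        clock += t.runtime
        order.append(t.name)
    return order, total
```

For two equal-value tasks where one decays five times faster, serving the fast-decaying task first preserves most of both yields, while the reverse order forfeits the fast-decaying task's value entirely; that asymmetry is what makes time-varying value scheduling sensitive to ordering.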
Ian T Foster, J. Gieraltowski, Scott Gose, N. Maltsev, E. May, Alex Rodriguez, Dinanath Sulakhe, A. Vaniachine, J. Shank, S. Youssef, D. Adams, R. Baker, W. Deng, J. Smith, Dantong Yu, I. Legrand, Suresh Singh, C. Steenberg, Yang Xia, M. Afaq, E. Berman, J. Annis, L. Bauerdick, M. Ernst, I. Fisk, L. Giacchetti, G. Graham, A. Heavey, J. Kaiser, N. Kuropatkin, R. Pordes, V. Sekhri, J. Weigand, Yujun Wu, Keith Baker, Lawrence Sorrillo, J. Huth, Matthew Allen, L. Grundhoefer, J. Hicks, F. Luehring, S. Peck, R. Quick, Stephen C. Simms, G. Fekete, Jan vandenBerg, Kihyeon Cho, Kihwan Kwon, Dongchul Son, Hyoungwoo Park, S. Canon, K. Jackson, D. Konerding, Jason R. Lee, D. Olson, I. Sakrejda, B. Tierney, Mark L. Green, Russ Miller, J. Letts, T. Martin, David Bury, C. Dumitrescu, D. Engh, R. Gardner, M. Mambelli, Y. Smirnov, Jens-S. Vöckler, M. Wilde, Yong Zhao, Xin Zhao, P. Avery, R. Cavanaugh, Bockjoo Kim, C. Prescott, J. Rodriguez, A. Zahn, S. McKee, C. Jordan, James E. Prewett, T. Thomas, H. Severini, Ben Cliff
The Grid2003 Project has deployed a multi-virtual-organization, application-driven grid laboratory ("Grid3") that has sustained for several months the production-level services required by physics experiments of the Large Hadron Collider at CERN (ATLAS and CMS), the Sloan Digital Sky Survey project, the gravitational wave search experiment LIGO, the BTeV experiment at Fermilab, as well as applications in molecular structure analysis and genome analysis, and computer science research projects in such areas as job and data scheduling. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, workloads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. We describe the principles that have guided the development of this unique infrastructure and the practical experiences that have resulted from its creation and use. We discuss application requirements for grid services deployment and configuration, monitoring infrastructure, application performance, metrics, and operational experiences. We also summarize lessons learned.
"The Grid2003 production grid: principles and practice." In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.36
Component-based approaches are becoming increasingly popular in the areas of adaptive distributed systems, Web services, and grid computing. In each case, the underlying infrastructure needs to address a deployment problem involving the placement of application components onto computational, data, and network resources across a wide-area environment subject to a variety of qualitative and quantitative constraints. In general, the deployment needs to also introduce auxiliary components (e.g., to compress/decompress data, or invoke GridFTP sessions to make data available at a remote site), and reuse preexisting components and data. To provide the flexibility required in the latter case, recently proposed systems such as Sekitei and Pegasus rely upon AI planning-based techniques. Although promising, the inherent complexity of AI planning and the fact that constraints governing component deployment often involve nonlinear and nonreversible functions have prevented such solutions from generating deployments in resource-constrained situations and achieving optimality in terms of overall resource usage or other cost metrics. We address both of these shortcomings in the context of the Sekitei system. Our extension relies upon information supplied by a domain expert, which classifies component behavior into a discrete set of levels. This discretization, often justified in practice, permits the planner to identify cost-optimal plans (whose quality improves with the level definitions) without restricting the form of the constraint functions.
"Optimal resource-aware deployment planning for component-based distributed applications," T. Kichkaylo, V. Karamcheti. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.25
GridShell maps grid and distributed computing concepts to the UNIX shell programming environment. This allows scientific and engineering users, already familiar with Tenex C Shell (tcsh) and Bourne Again Shell (bash), to quickly use the services provided by these distributed resources. Our demo will show how the scripting capabilities of GridShell can be used to orchestrate and coordinate the workflow of an engineering simulation using an established 2D flow solver.
"Orchestrating and coordinating scientific/engineering workflows using GridShell," Edward Walker, T. Minyard. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.26
Advances in communication for parallel programming have yielded one-sided messaging systems. The MPI bindings for Ruby have been augmented to include the remote memory access functions of MPI-2.
"MPI Ruby with remote memory access," Christopher C. Aycock. In: Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing (HPDC 2004). doi:10.1109/HPDC.2004.24