{"title":"Availability Prediction Based Replication Strategies for Grid Environments","authors":"Brent Rood, M. Lewis","doi":"10.1109/CCGRID.2010.121","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.121","url":null,"abstract":"Volunteer-based grid computing resources are characteristically volatile and frequently become unavailable due to the autonomy that owners maintain over them. This resource volatility has a significant influence on the applications the resources host. Availability predictors can forecast unavailability and can provide schedulers with information about reliability, which helps them make better scheduling decisions when combined with information about speed and load. This paper studies using this prediction information to decide when to replicate jobs. In particular, our predictors forecast the probability that a job will complete uninterrupted, and our schedulers replicate those jobs that are least likely to do so. Our strategies outperform other comparable replication strategies, as measured by improved makespan and fewer redundant operations. We define a new \"replication efficiency\" metric, and demonstrate that our availability predictor can provide information that allows our schedulers to be more efficient than the most closely related replication strategy for a variety of loads in a trace-based grid simulation. We demonstrate that under low load conditions, our techniques come within 6% of the makespan improvement of a previously proposed replication technique while creating 76.8% fewer replicas, and under higher loads can improve makespan marginally while creating 72.5% fewer replicas.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134240623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
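The record above describes replicating the jobs least likely to finish uninterrupted and measuring a "replication efficiency" metric, but this listing does not define either precisely. A minimal sketch, assuming a hypothetical threshold policy and a hypothetical reading of efficiency as relative makespan improvement per replica created (neither is necessarily the paper's exact formulation):

```python
def jobs_to_replicate(completion_prob, threshold=0.5):
    """Select jobs least likely to complete uninterrupted.
    completion_prob: job id -> predicted probability of uninterrupted
    completion. The 0.5 cutoff is a hypothetical policy choice."""
    return [job for job, p in completion_prob.items() if p < threshold]

def replication_efficiency(makespan_base, makespan_repl, replicas_created):
    """Hypothetical reading of 'replication efficiency': relative
    makespan improvement earned per replica created."""
    improvement = (makespan_base - makespan_repl) / makespan_base
    return improvement / replicas_created
```

Under this reading, a scheme that cuts makespan from 100 to 80 hours with 10 replicas scores 0.02 improvement per replica; creating fewer replicas for the same improvement scores higher, matching the paper's "fewer redundant operations" framing.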
{"title":"TOPP goes Rapid The OpenMS Proteomics Pipeline in a Grid-Enabled Web Portal","authors":"S. Gesing, Jano van Hemert, J. Koetsier, A. Bertsch, O. Kohlbacher","doi":"10.1109/CCGRID.2010.39","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.39","url":null,"abstract":"Proteomics, the study of all the proteins contained in a particular sample, e.g., a cell, is a key technology in current biomedical research. The complexity and volume of proteomics data sets produced by mass spectrometric methods clearly suggest the use of grid-based high-performance computing for analysis. TOPP and OpenMS are open-source packages for proteomics data analysis; however, they do not provide support for Grid computing. In this work we present a portal interface for high-throughput data analysis with TOPP. The portal is based on Rapid, a tool for efficiently generating standardized portlets for a wide range of applications. The web-based interface allows the creation and editing of user-defined pipelines and their execution and monitoring on a Grid infrastructure. The portal also supports several file transfer protocols for data staging. It thus provides a simple and complete solution to high-throughput proteomics data analysis for inexperienced users through a convenient portal interface.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132885412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effects of Untruthful Bids on User Utilities and Stability in Computing Markets","authors":"Sergei Shudler, Lior Amar, A. Barak, Ahuva Mu'alem","doi":"10.1109/CCGRID.2010.57","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.57","url":null,"abstract":"Markets of computing resources typically consist of a cluster (or a multi-cluster) and jobs that arrive over time and request computing resources in exchange for payment. In this paper we study a real system that is capable of preemptive process migration (i.e., moving jobs across nodes) and that uses a market-based resource allocation mechanism for job allocation. Specifically, we formalize our system into a market model and employ simulation-based analysis (performed on real data) to study the effects of users' behavior on performance and utility. Online settings are typically characterized by a large amount of uncertainty; therefore, it is reasonable to assume that users will consider simple strategies to game the system. We thus suggest a novel approach to modeling users' behavior called the Small Risk-aggressive Group model. We show that under this model untruthful users experience degraded performance. The main result and contribution of this paper is that using the k-th price payment scheme, which is a natural adaptation of the classical second-price scheme, discourages these users from attempting to game the market. The preemptive capability not only makes it possible to use the k-th price scheme, but also makes our scheduling algorithm superior to other non-preemptive algorithms. Finally, we design a simple one-shot game to model the interaction between the provider and the consumers. We then show (using the same simulation-based analysis) that market stability in the form of (symmetric) Nash equilibrium is likely to be achieved in several cases.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134474275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
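The k-th price scheme mentioned above adapts the classical second-price auction to a market with multiple identical slots. A minimal sketch, assuming the common uniform-price reading in which the top-k bidders win and each pays the highest losing bid (the paper's exact payment rule may differ):

```python
def kth_price_auction(bids, k):
    """Allocate k identical slots to the top-k bidders; every winner
    pays the (k+1)-th highest bid, i.e., the highest losing bid.
    With k = 1 this reduces to the classical second-price auction.
    bids: user id -> bid amount."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [user for user, _ in ranked[:k]]
    # If every bidder wins, there is no losing bid to set the price.
    price = ranked[k][1] if len(ranked) > k else 0.0
    return winners, price
```

Because a winner's payment is set by a losing bid rather than by their own, shading one's bid below one's true value cannot lower the price, only the chance of winning, which is the property that discourages gaming the market.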
{"title":"Scalable Communication Trace Compression","authors":"S. Krishnamoorthy, Khushbu Agarwal","doi":"10.1109/CCGRID.2010.111","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.111","url":null,"abstract":"Characterizing the communication behavior of parallel programs through tracing can help understand an application's characteristics, model its performance, and predict behavior on future systems. However, lossless communication traces can get prohibitively large, causing programmers to resort to a variety of other techniques. In this paper, we present a novel approach to lossless communication trace compression. We augment the Sequitur compression algorithm to employ it in communication trace compression of parallel programs. We present optimizations to reduce the memory overhead, reduce the size of the generated trace files, and enable compression across multiple processes in a parallel program. The evaluation shows improved compression and reduced overhead over other approaches, with up to 3 orders of magnitude improvement for the NAS MG benchmark. We also observe that, unlike existing schemes, the trace file sizes and the memory overhead incurred are less sensitive to, if not independent of, the problem size for the NAS benchmarks.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130120586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
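Sequitur, the algorithm the paper above builds on, infers a context-free grammar from a symbol sequence by ensuring no digram (adjacent symbol pair) occurs twice. As an illustration of why repetitive communication traces compress so well, here is a simple offline sketch that repeatedly replaces the most frequent digram with a fresh rule; this is closer to Re-Pair than to the real online, linear-time Sequitur, and is for intuition only:

```python
from collections import Counter

def digram_compress(seq):
    """Grammar-based compression sketch: repeatedly rewrite the most
    frequent digram as a fresh nonterminal until no digram repeats.
    Returns (compressed sequence, rules mapping nonterminal -> digram)."""
    seq = list(seq)
    rules = {}
    next_id = 0
    while True:
        counts = Counter(zip(seq, seq[1:]))
        if not counts:
            return seq, rules
        digram, freq = max(counts.items(), key=lambda kv: kv[1])
        if freq < 2:
            return seq, rules
        nonterminal = f"R{next_id}"
        next_id += 1
        rules[nonterminal] = digram
        # Replace non-overlapping occurrences left to right.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == digram:
                out.append(nonterminal)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
```

A trace like `abcabcabc` collapses to two symbols plus three rules; a loop that repeats the same send/receive pattern thousands of times collapses the same way, which is why grammar-based schemes can be orders of magnitude smaller than raw traces.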
{"title":"Discovering Piecewise Linear Models of Grid Workload","authors":"Tamás Éltetö, C. Germain, P. Bondon, M. Sebag","doi":"10.1109/CCGRID.2010.69","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.69","url":null,"abstract":"Despite extensive research focused on enabling QoS for grid users through economic and intelligent resource provisioning, no consensus has emerged on the most promising strategies. On top of intrinsically challenging problems, the complexity and size of the data have so far drastically limited the number of comparative experiments. An alternative to experimenting on real, large, and complex data is to look for well-founded and parsimonious representations. This study is based on exhaustive information about the gLite-monitored jobs from the EGEE grid, representative of a significant fraction of e-science computing activity in Europe. Our main contributions are twofold. First, we found that workload models for this grid can consistently be discovered from the real data, and that limiting the range of models to piecewise linear time series models is sufficiently powerful. Second, we present a bootstrapping strategy for building more robust models from the limited samples at hand.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132184631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
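A piecewise linear time-series model of the kind named above fits a separate straight line to each segment of the workload series. A minimal sketch with the segment boundaries supplied by hand (the paper's contribution is discovering such breakpoints and model structure from the data, which this sketch does not attempt):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var if var else 0.0
    return slope, mean_y - slope * mean_x

def piecewise_linear(ys, breakpoints):
    """Fit an independent line on each segment of the series ys,
    where breakpoints lists the indices at which segments start.
    Returns one (slope, intercept) pair per segment."""
    xs = list(range(len(ys)))
    bounds = [0] + list(breakpoints) + [len(ys)]
    return [linear_fit(xs[a:b], ys[a:b]) for a, b in zip(bounds, bounds[1:])]
```

For a workload that ramps up linearly and then plateaus, two segments with a breakpoint at the transition capture the series with just four parameters, which is the parsimony argument the abstract makes.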
{"title":"High Resolution Program Flow Visualization of Hardware Accelerated Hybrid Multi-core Applications","authors":"D. Hackenberg, G. Juckeland, H. Brunst","doi":"10.1109/CCGRID.2010.27","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.27","url":null,"abstract":"The advent of multi-core processors has made parallel computing techniques mandatory on mainstream systems. With the recent rise of hardware accelerators, hybrid parallelism adds yet another dimension of complexity to the process of software development. This article presents a tool for graphical program flow analysis of hardware-accelerated parallel programs. It monitors the hybrid program execution to record and visualize many performance-relevant events along the way. Representative real-world applications written for both IBM's Cell processor and NVIDIA's CUDA API are studied as examples. To the best of our knowledge, this approach is the first that visualizes the parallelism in hybrid multi-core systems at the presented level of detail.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116584283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Service Oriented Approach to High Performance Scientific Computing","authors":"J. Mulerikkal, P. Strazdins","doi":"10.1109/CCGRID.2010.93","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.93","url":null,"abstract":"The Service Oriented Architecture (SOA) has long been embraced in distributed and grid computing to produce high-performance results. SOA is liked by application programmers for its trademark characteristics of programmability, efficiency under heterogeneous conditions, and fault tolerance. It has worked well for high-performance financial applications, but not for scientific applications, which are too fine-grained and communication-intensive to be efficient in distributed environments. This paper argues that to make the SOA model work well for those scientific applications, we need to reduce the overhead costs associated with the smaller task loads arising from finer granularity and increased communication in those applications. This paper proposes a data service to be used along with the existing compute services in SOA middleware to enable inter-communication of finer tasks without losing the SOA properties of programmability and efficiency under heterogeneity. This data service will better enable high-performance scientific computing for medium- to fine-grained scientific applications.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116603379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Profit-Driven Service Request Scheduling in Clouds","authors":"Young Choon Lee, Chen Wang, Albert Y. Zomaya, B. Zhou","doi":"10.1109/CCGRID.2010.83","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.83","url":null,"abstract":"A primary driving force of the recent cloud computing paradigm is its inherent cost effectiveness. As in many basic utilities, such as electricity and water, consumers/clients in cloud computing environments are charged based on their service usage, hence the term 'pay-per-use'. While this pricing model is very appealing for both service providers and consumers, fluctuating service request volume and conflicting objectives (e.g., profit vs. response time) between providers and consumers hinder its effective application to cloud computing environments. In this paper, we address the problem of service request scheduling in cloud computing systems. We consider a three-tier cloud structure, which consists of infrastructure vendors, service providers and consumers; the latter two parties are of particular interest to us. Clearly, scheduling strategies in this scenario should satisfy the objectives of both parties. Our contributions include the development of a pricing model, using processor sharing, for clouds; the application of this pricing model to composite services with dependency consideration (to the best of our knowledge, this study is the first such attempt); and the development of two sets of profit-driven scheduling algorithms.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126190026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
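Under the processor-sharing discipline that the pricing model above builds on, n concurrent requests each receive 1/n of the server's capacity, so a request's response time depends on how many others share the machine. A minimal sketch computing completion times for a batch of requests that arrive together (the paper's model additionally handles online arrivals and the pricing itself, which are omitted here):

```python
def ps_completion_times(demands, capacity=1.0):
    """Egalitarian processor sharing for jobs arriving at time 0:
    all active jobs share `capacity` equally, so the job with the
    smallest demand finishes first, after which the survivors share
    the capacity among fewer jobs. demands: work units per job.
    Returns each job's completion time, in input order."""
    order = sorted(range(len(demands)), key=lambda i: demands[i])
    finish = [0.0] * len(demands)
    t, done_work, active = 0.0, 0.0, len(demands)
    for i in order:
        # Time for every active job to absorb the remaining work gap.
        t += (demands[i] - done_work) * active / capacity
        finish[i] = t
        done_work = demands[i]
        active -= 1
    return finish
```

For two requests with demands 1 and 2 on a unit-capacity server, both run at rate 1/2 until the first finishes at time 2, after which the second runs alone and finishes at time 3; a provider can then trade off the resulting response times against per-request revenue.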
{"title":"High Performance Data Transfer in Grid Environment Using GridFTP over InfiniBand","authors":"H. Subramoni, P. Lai, R. Kettimuthu, D. Panda","doi":"10.1109/CCGRID.2010.115","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.115","url":null,"abstract":"GridFTP, designed using the Globus XIO framework, is one of the most popular methods used to perform data transfers in grid environments. But the performance of GridFTP in the WAN is limited by the relatively low communication bandwidth offered by existing network protocols. On the other hand, modern interconnects such as InfiniBand, with many advanced communication features like zero-copy protocols and RDMA operations, can greatly improve communication efficiency. In this paper, we take on the challenge of combining the ease of use of the Globus XIO framework and the high performance achieved through InfiniBand communication, thereby natively supporting GridFTP over InfiniBand-based networks. The Advanced Data Transfer Service (ADTS), designed in our previous work, provides the low-level InfiniBand support to the Globus XIO layer. We introduce the concept of I/O staging in the Globus XIO ADTS driver to achieve efficient disk-based data transfers. We evaluate our designs in both LAN and WAN environments using micro-benchmarks as well as communication traces from several real-world applications. We also provide insights into the communication performance with some in-depth analysis. Our experimental evaluation shows a performance improvement of up to 100% for ADTS-based data transfers as opposed to TCP- or UDP-based ones in LAN and high-delay WAN scenarios.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127132689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SciCloud: Scientific Computing on the Cloud","authors":"S. Srirama, Oleg Batrashev, E. Vainikko","doi":"10.1109/CCGRID.2010.56","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.56","url":null,"abstract":"SciCloud is a project studying the scope of establishing private clouds at universities. With these clouds, researchers can efficiently use the already existing resources to solve computationally intensive scientific, mathematical, and academic problems. The project established a Eucalyptus-based private cloud and developed several customized images that can be used to solve problems from the mobile web services, distributed computing, and bioinformatics domains. The poster demonstrates SciCloud and presents two applications that are benefiting from the setup, along with our research scope and results in scientific computing.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129478608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}