Overview of Medical Data Management Solutions for Research Communities
S. Camarasu-Pop, F. Cervenansky, Yonny Cardenas, Jean-Yves Nief, H. Benoit-Cattin
DOI: 10.1109/CCGRID.2010.55
Medical imaging research deals with large, heterogeneous, and fragmented collections of medical images. The need for secure, federated, and functional medical image databases is very strong within these research communities. This paper provides an overview of the projects concerned with building medical image databases for medical imaging research. It also discusses the characteristics and requirements of this community and assesses the extent to which existing solutions can meet these specific requirements.
Team-Based Message Logging: Preliminary Results
Esteban Meneses, C. Mendes, L. Kalé
DOI: 10.1109/CCGRID.2010.110
Fault tolerance will be a fundamental imperative in the next decade, as machines containing hundreds of thousands of cores are installed at various locations. In this context, the traditional checkpoint/restart model does not seem to be a suitable option, since a single failure on one processor makes all the processors roll back to their latest checkpoint. In-memory message logging is an alternative that avoids this global restoration process and instead replays messages to the failed processor. However, message logging carries a large memory overhead, because each message must be logged so that it can be played back if a failure occurs. In this paper, we introduce a technique that alleviates the memory demands of message logging by grouping processors into teams. These teams act as a failure unit: if one team member fails, all the other members of that team roll back to their latest checkpoint and start the recovery process. This eliminates the need to log message contents within teams. The memory savings produced by this approach depend on the characteristics of the application, the number of messages sent per computation unit, and the size of those messages. We present promising results for multiple benchmarks. As an example, the NPB-CG code running class D on 512 cores reduces the memory overhead of message logging by 62%.
Dynamic Job-Clustering with Different Computing Priorities for Computational Resource Allocation
M. Hussin, Young Choon Lee, Albert Y. Zomaya
DOI: 10.1109/CCGRID.2010.119
The diversity of job characteristics, such as unstructured or unorganized arrival patterns and differing priorities, can lead to inefficient resource allocation. The characterization of jobs is therefore an important aspect worthy of investigation: it enables judicious resource allocation decisions that serve two goals, performance and utilization, and improves resource availability.
Generalized Spot-Checking for Sabotage-Tolerance in Volunteer Computing Systems
Kanno Watanabe, Masaru Fukushi
DOI: 10.1109/CCGRID.2010.97
Although volunteer computing (VC) systems rank among the most powerful computing platforms, they still face the problem of guaranteeing computational correctness, due to the inherent unreliability of volunteer participants. Spot-checking, which checks each participant by allocating spotter jobs, is a promising approach to validating computation results. Current spot-checking and the associated sabotage-tolerance methods rest on the implicit assumption that participants never detect the allocation of spotter jobs; however, generating such undetectable spotter jobs is still an open problem. Hence, in real VC environments where this assumption does not always hold, spot-checking-based sabotage-tolerance methods (such as the well-known credibility-based voting) become almost unable to guarantee computational correctness. In this paper, we generalize spot-checking by introducing the idea of imperfect checking. Using this technique, it becomes possible to estimate correct credibility for participant nodes even if they may detect spotter jobs. Moreover, building on imperfect checking, we propose a new credibility-based voting scheme that does not need to allocate spotter jobs. Simulation results show that the proposed method reduces computation time compared to the original credibility-based voting, while guaranteeing the same level of computational correctness.
Programming Challenges for the Implementation of Numerical Quadrature in Atomic Physics on FPGA and GPU Accelerators
C. Gillan, T. Steinke, J. Bock, S. Borchert, I. Spence, N. Scott
DOI: 10.1109/CCGRID.2010.30
Although the need for heterogeneous chips in high-performance numerical computing was identified by Chillemi and co-authors in 2001, it is only over the past five years that it has emerged as the new frontier for HPC. In this environment, one or more accelerators work symbiotically with a multi-core CPU on each node. Two such accelerator technologies are FPGAs and GPUs, each of which exploits instruction-level parallelism. This paper provides a case study on implementing one computational algorithm in each of these heterogeneous environments. The algorithm, drawn from atomic physics, is the evaluation of two-electron integrals using direct numerical quadrature. The results of the study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
Gridifying a Diffusion Tensor Imaging Analysis Pipeline
M. Caan, F. Vos, A. V. Kampen, S. Olabarriaga, L. Vliet
DOI: 10.1109/CCGRID.2010.99
Diffusion Tensor MRI (DTI) is a rather recent image acquisition modality that can help identify disease processes in nerve bundles in the brain. Due to the large and complex nature of such data, its analysis requires new and sophisticated pipelines that are more efficiently executed within a grid environment. We present our progress over the past four years in the development and porting of the DTI analysis pipeline to grids. Starting with simple jobs submitted from the command line, we moved towards a workflow-based implementation and finally into a web service that end users can access via web browsers. The analysis algorithms evolved from basic to state-of-the-art, currently enabling the automatic calculation of a population-specific 'atlas' in which even complex brain regions are described in an anatomically correct way. Performance statistics show a clear improvement over the years, representing the mutual benefit of a technology push and an application pull.
Towards Trust in Desktop Grid Systems
Yvonne Bernard, Lukas Klejnowski, J. Hähner, C. Müller-Schloer
DOI: 10.1109/CCGRID.2010.73
The Organic Computing (OC) Initiative deals with technical systems that consist of a large number of distributed and highly interconnected subsystems. In such systems, it is impossible for a designer to foresee all possible system configurations and to plan appropriate system behaviour completely at design time. The aim is to endow such technical systems with so-called self-X properties, such as self-organisation, self-configuration, or self-healing. In such dynamic systems, trust is an important prerequisite for the future use of Organic Computing systems and algorithms in market-ready products. The OC-Trust project aims at introducing trust mechanisms to improve and assure the interoperability of subsystems. In this paper, we deal with aspects of organic systems regarding trustworthiness at the subsystem (agent) level in a desktop grid system. We develop an agent-based simulation of a desktop grid to show that the introduction of trust concepts improves the system's performance, in that it speeds up processes at the agent level. Specifically, we investigate a bottom-up, self-organised development of trust structures that creates coalition groups of agents that work more efficiently than standard algorithms. Here, an agent can determine individually to what extent it belongs to a Trusted Community.
Decentralized Resource Availability Prediction for a Desktop Grid
Karthick Ramachandran, H. Lutfiyya, M. Perry
DOI: 10.1109/CCGRID.2010.54
In a desktop grid model, a job (computational task) is submitted for execution on a resource only when the resource is idle. If the desktop machines are also used for other purposes, there is no guarantee that a job which has started to execute on a resource will complete without disruption from user activity (such as a keystroke or mouse movement). This problem becomes more challenging in a Peer-to-Peer (P2P) desktop grid model, where there is no central server that decides to allocate a job to a particular resource. This paper describes a P2P desktop grid framework that utilizes resource availability prediction. We improve the functionality of the system by submitting jobs to machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
Framework for Efficient Indexing and Searching of Scientific Metadata
Chaitali Gupta, M. Govindaraju
DOI: 10.1109/CCGRID.2010.120
A seamless and intuitive data reduction capability for the vast amount of scientific metadata generated by experiments is critical to ensure effective use of the data by domain scientists. The portal environments and scientific gateways currently used by scientists provide search capabilities limited to the pre-defined pull-down menus and conditions set in the portal interface. Currently, data reduction can only be effectively achieved by scientists who have developed expertise in dealing with complex and disparate query languages. A common theme in our discussions with scientists is that a data reduction capability similar to web search in terms of ease of use, scalability, and freshness/accuracy of results is a critical need that can greatly enhance the productivity and quality of scientific research. Most existing search tools are designed for exact string matching, but such matches are highly unlikely given the nature of metadata produced by instruments and a user's inability to recall exact numbers to search for in very large datasets. This paper presents research on locating metadata of interest within a range of values. To meet this goal, we leverage the use of XML in metadata descriptions for scientific datasets, specifically the NeXus datasets generated by SNS scientists. We have designed a scalable indexing structure for processing data reduction queries. Web semantics and ontology-based methodologies are also employed to provide an elegant, intuitive, and powerful free-form-query-based data reduction interface to end users.
Representing Eager Evaluation in a Demand Driven Model of Streams on Cloud Infrastructure
P. Martinaitis, A. Wendelborn
DOI: 10.1109/CCGRID.2010.88
Previously, we developed our StreamComponents framework, which uses distributed components and web services to facilitate control, reconfiguration, and deployment of streams on both local clusters and remote cloud infrastructure. Our stream evaluation semantics are fundamentally demand driven: a conservative view that ensures no unnecessary computation, supports flexible structures such as cyclic networks and infinite streams, and facilitates resource management. In this paper, we focus on the evaluation semantics of our stream model and explore circumstances under which more eager evaluation is desirable while retaining the fundamental semantics. We introduce the Indirected Asynchronous Method (IAM) pattern, which makes novel use of futures and auto-continuations to facilitate fully asynchronous demand propagation, leading to more eager evaluation of the streams. We present an evaluation of the model on both cluster and cloud infrastructure, showing that very useful amounts of pipelining parallelism can be achieved.