
2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing: Latest Publications

Overview of Medical Data Management Solutions for Research Communities
S. Camarasu-Pop, F. Cervenansky, Yonny Cardenas, Jean-Yves Nief, H. Benoit-Cattin
Medical imaging research deals with large, heterogeneous and fragmented collections of medical images. The need for secure, federated and functional medical image databases is very strong within these research communities. This paper provides an overview of the different projects concerned with building medical image databases for medical imaging research. It also discusses the characteristics and requirements of this community and tries to determine to what extent existing solutions can meet these specific requirements.
{"title":"Overview of Medical Data Management Solutions for Research Communities","authors":"S. Camarasu-Pop, F. Cervenansky, Yonny Cardenas, Jean-Yves Nief, H. Benoit-Cattin","doi":"10.1109/CCGRID.2010.55","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.55","url":null,"abstract":"Medical imaging research deals with large, heterogeneous and fragmented amounts of medical images. The need for secure, federated and functional medical image databases is very strong within these research communities. This paper provides an overview of the different projects concerned with building medical image databases for medical imaging research. It also discusses the characteristics and requirements of this community and tries to determine to what extent existing solutions can answer these specific requirements.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115165032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Team-Based Message Logging: Preliminary Results
Esteban Meneses, C. Mendes, L. Kalé
Fault tolerance will be a fundamental imperative in the next decade as machines containing hundreds of thousands of cores will be installed at various locations. In this context, the traditional checkpoint/restart model does not seem to be a suitable option, since it makes all the processors roll back to their latest checkpoint in case of a single failure in one of the processors. In-memory message logging is an alternative that avoids this global restoration process and instead replays the messages to the failed processor. However, there is a large memory overhead associated with message logging because each message must be logged so it can be played back if a failure occurs. In this paper, we introduce a technique to alleviate the demand of memory in message logging by grouping processors into teams. These teams act as a failure unit: if one team member fails, all the other members in that team roll back to their latest checkpoint and start the recovery process. This eliminates the need to log message contents within teams. The savings in memory produced by this approach depend on the characteristics of the application, the number of messages sent per computation unit and size of those messages. We present promising results for multiple benchmarks. As an example, the NPB-CG code running class D on 512 cores manages to reduce the memory overhead of message logging by 62%.
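As a rough illustration of the team-based logging rule described in this abstract, the Python sketch below logs a message payload only when sender and receiver belong to different teams. The block-based team assignment, the team size and all names are assumptions made for illustration; this is not the authors' implementation.

```python
# Minimal sketch of the team-based logging decision: intra-team messages are
# regenerated when the whole team rolls back, so only inter-team payloads are kept.
# Team size and the block partitioning of ranks are illustrative assumptions.

TEAM_SIZE = 8  # assumed number of processors per team

def team_of(rank: int) -> int:
    """Map a processor rank to its team id (assumed block partitioning)."""
    return rank // TEAM_SIZE

def must_log_payload(sender: int, receiver: int) -> bool:
    """Only inter-team messages need their contents logged: if one team member
    fails, all team members roll back together, so intra-team messages are
    replayed from re-execution rather than from a stored log."""
    return team_of(sender) != team_of(receiver)

def send(sender: int, receiver: int, payload: bytes, log: list) -> None:
    if must_log_payload(sender, receiver):
        log.append((sender, receiver, payload))  # memory overhead is paid only here
    # ... actual transmission would happen here ...

# Example: with TEAM_SIZE = 8, a message 3 -> 5 stays inside a team (not logged),
# while 3 -> 12 crosses teams and its payload is kept for replay after a failure.
```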
{"title":"Team-Based Message Logging: Preliminary Results","authors":"Esteban Meneses, C. Mendes, L. Kalé","doi":"10.1109/CCGRID.2010.110","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.110","url":null,"abstract":"Fault tolerance will be a fundamental imperative in the next decade as machines containing hundreds of thousands of cores will be installed at various locations. In this context, the traditional checkpoint/restart model does not seem to be a suitable option, since it makes all the processors roll back to their latest checkpoint in case of a single failure in one of the processors. In-memory message logging is an alternative that avoids this global restoration process and instead replays the messages to the failed processor. However, there is a large memory overhead associated with message logging because each message must be logged so it can be played back if a failure occurs. In this paper, we introduce a technique to alleviate the demand of memory in message logging by grouping processors into teams. These teams act as a failure unit: if one team member fails, all the other members in that team roll back to their latest checkpoint and start the recovery process. This eliminates the need to log message contents within teams. The savings in memory produced by this approach depend on the characteristics of the application, the number of messages sent per computation unit and size of those messages. We present promising results for multiple benchmarks. As an example, the NPB-CG code running class D on 512 cores manages to reduce the memory overhead of message logging by 62%.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116444139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
Dynamic Job-Clustering with Different Computing Priorities for Computational Resource Allocation
M. Hussin, Young Choon Lee, Albert Y. Zomaya
The diversity of job characteristics, such as the unstructured and unorganized arrival of jobs and their priorities, can lead to inefficient resource allocation. Therefore, the characterization of jobs is an important aspect worthy of investigation: it enables judicious resource allocation decisions that achieve two goals (performance and utilization) and improves resource availability.
{"title":"Dynamic Job-Clustering with Different Computing Priorities for Computational Resource Allocation","authors":"M. Hussin, Young Choon Lee, Albert Y. Zomaya","doi":"10.1109/CCGRID.2010.119","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.119","url":null,"abstract":"The diversity of job characteristics such as unstructured/unorganized arrival of jobs and priorities, could lead to inefficient resource allocation. Therefore, the characterization of jobs is an important aspect worthy of investigation, which enables judicious resource allocation decisions achieving two goals (performance and utilization) and improves resource availability.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122293924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Generalized Spot-Checking for Sabotage-Tolerance in Volunteer Computing Systems
Kanno Watanabe, Masaru Fukushi
While volunteer computing (VC) systems rank among the most powerful computing platforms, they still face the problem of guaranteeing computational correctness, due to the inherent unreliability of volunteer participants. The spot-checking technique, which checks each participant by allocating spotter jobs, is a promising approach to the validation of computation results. Current spot-checking and the associated sabotage-tolerance methods are based on the implicit assumption that participants never detect the allocation of spotter jobs; however, generating such undetectable spotter jobs is still an open problem. Hence, in real VC environments, where this implicit assumption does not always hold, spot-checking-based sabotage-tolerance methods (such as the well-known credibility-based voting) can hardly guarantee computational correctness. In this paper, we generalize the spot-checking technique by introducing the idea of imperfect checking. Using our new technique, it becomes possible to estimate the correct credibility of participant nodes even if they may detect spotter jobs. Moreover, based on the idea of imperfect checking, we propose a new credibility-based voting scheme that does not need to allocate spotter jobs. Simulation results show that the proposed method reduces computation time compared to the original credibility-based voting, while guaranteeing the same level of computational correctness.
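To make the general credibility-based voting idea concrete, here is a minimal sketch in which each passed spot-check raises a worker's credibility and an answer is accepted once the combined credibility of its supporters crosses a threshold. The credibility formula, the evidence-combination rule, the threshold and all names are placeholders for illustration, not the estimators derived in the paper.

```python
# Illustrative spot-checking + credibility-weighted voting. The numeric model
# (base_error_rate, combination rule, threshold) is an assumption, not the paper's.

from collections import defaultdict

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.spot_checks_passed = 0   # incremented whenever a spotter job is answered correctly

    def credibility(self, base_error_rate=0.1):
        # Assumed model: each passed spot-check shrinks the chance this worker is a saboteur.
        return 1.0 - base_error_rate / (1 + self.spot_checks_passed)

def accept_result(results, threshold=0.999):
    """results: list of (worker, answer). Accept an answer once the combined
    credibility of the workers returning it reaches the threshold, else defer."""
    if not results:
        return None
    support = defaultdict(float)
    for worker, answer in results:
        # Combine independent supporting evidence: 1 - prod(1 - credibility_i).
        support[answer] = 1.0 - (1.0 - support[answer]) * (1.0 - worker.credibility())
    best = max(support, key=support.get)
    return best if support[best] >= threshold else None
```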
{"title":"Generalized Spot-Checking for Sabotage-Tolerance in Volunteer Computing Systems","authors":"Kanno Watanabe, Masaru Fukushi","doi":"10.1109/CCGRID.2010.97","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.97","url":null,"abstract":"While volunteer computing (VC) systems reach the most powerful computing platforms, they still have the problem of guaranteeing computational correctness, due to the inherent unreliability of volunteer participants. Spot-checking technique, which checks each participant by allocating spotter jobs, is a promising approach to the validation of computation results. The current spot-checking technique and associated sabotage-tolerance methods are based on the implicit assumption that participants never detect the allocation of spotter jobs, however generating such spotter jobs is still an open problem. Hence, in the real VC environment where the implicit assumption does not always hold, spot-checking-based sabotage-tolerance methods (such as well-known credibility-based voting) become almost impossible to guarantee the computational correctness. In this paper, we generalize the spot-checking technique by introducing the idea of imperfect checking. Using our new technique, it becomes possible to estimate the correct credibility for participant nodes even if they may detect spotter jobs. Moreover, by the idea of imperfect checking, we propose a new credibility-based voting which does not need to allocate spotter jobs. Simulation results show that the proposed method reduces the computation time compared to the original credibility-based voting, while guaranteeing the same level of computational correctness.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"414 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132413779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Programming Challenges for the Implementation of Numerical Quadrature in Atomic Physics on FPGA and GPU Accelerators
C. Gillan, T. Steinke, J. Bock, S. Borchert, I. Spence, N. Scott
Although the need for heterogeneous chips in high-performance numerical computing was identified by Chillemi and co-authors in 2001, it is only over the past five years that it has emerged as the new frontier for HPC. In this environment, one or more accelerators work symbiotically, on each node, with a multi-core CPU. Two such accelerator technologies are the FPGA and the GPU, each of which exploits instruction-level parallelism. This paper provides a case study on implementing one computational algorithm in each of these heterogeneous environments. The algorithm, drawn from atomic physics, is the evaluation of two-electron integrals using direct numerical quadrature. The results of the study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.
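The abstract does not specify the integrand or the quadrature rule, so the sketch below only illustrates the general approach of direct numerical quadrature: a nested Gauss-Legendre rule over a generic two-dimensional integrand. The integrand, integration limits and node counts are assumptions, not details from the paper.

```python
# Illustrative nested Gauss-Legendre quadrature for a two-dimensional integral,
# as a stand-in for the two-electron integrals mentioned above. The integrand f
# and node count are assumptions for illustration only.
import numpy as np

def gauss_legendre_2d(f, a, b, c, d, n=32):
    """Approximate the integral of f(x, y) over [a, b] x [c, d]."""
    x, wx = np.polynomial.legendre.leggauss(n)      # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)           # map to [a, b]
    ym, yr = 0.5 * (c + d), 0.5 * (d - c)           # map to [c, d]
    xs, ys = xm + xr * x, ym + yr * x
    ws = np.outer(wx, wx) * xr * yr                 # tensor-product weights
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return np.sum(ws * f(X, Y))

if __name__ == "__main__":
    # Purely illustrative radial-style integrand: exp(-r1 - r2) / max(r1, r2).
    f = lambda r1, r2: np.exp(-r1 - r2) / np.maximum(r1, r2)
    print(gauss_legendre_2d(f, 1e-6, 20.0, 1e-6, 20.0, n=64))
```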
{"title":"Programming Challenges for the Implementation of Numerical Quadrature in Atomic Physics on FPGA and GPU Accelerators","authors":"C. Gillan, T. Steinke, J. Bock, S. Borchert, I. Spence, N. Scott","doi":"10.1109/CCGRID.2010.30","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.30","url":null,"abstract":"Although the need for heterogeneous chips in high performance numerical computing was identified by Chillemi and co-authors in 2001 it is only over the past five years that it has emerged as the new frontier for HPC. In this environment one or more accelerators works symbiotically, on each node, with a multi-core CPU. Two such accelerator technologies are FPGA and GPU each of which works with instruction level parallelism. This paper provides a case study on implementing one computational algorithm on each of these heterogeneous environments. The algorithm is the evaluation of two electron integrals using direct numerical quadrature and is drawn from atomic physics. The results of the study show that while each accelerator is viable, there are considerable differences in the implementation strategies that must be followed on each.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"417 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132625853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Gridifying a Diffusion Tensor Imaging Analysis Pipeline
M. Caan, F. Vos, A. V. Kampen, S. Olabarriaga, L. Vliet
Diffusion Tensor MRI (DTI) is a rather recent image acquisition modality that can help identify disease processes in nerve bundles in the brain. Due to the large and complex nature of such data, its analysis requires new and sophisticated pipelines that are more efficiently executed within a grid environment. We present our progress over the past four years in the development and porting of the DTI analysis pipeline to grids. Starting with simple jobs submitted from the command-line, we moved towards a workflow-based implementation and finally into a web service that can be accessed via web browsers by end-users. The analysis algorithms evolved from basic to state-of-the-art, currently enabling the automatic calculation of a population-specific ‘atlas’ where even complex brain regions are described in an anatomically correct way. Performance statistics show a clear improvement over the years, representing a mutual benefit from both a technology push and application pull.
{"title":"Gridifying a Diffusion Tensor Imaging Analysis Pipeline","authors":"M. Caan, F. Vos, A. V. Kampen, S. Olabarriaga, L. Vliet","doi":"10.1109/CCGRID.2010.99","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.99","url":null,"abstract":"Diffusion Tensor MRI (DTI) is a rather recent image acquisition modality that can help identify disease processes in nerve bundles in the brain. Due to the large and complex nature of such data, its analysis requires new and sophisticated pipelines that are more efficiently executed within a grid environment. We present our progress over the past four years in the development and porting of the DTI analysis pipeline to grids. Starting with simple jobs submitted from the command-line, we moved towards a workflow-based implementation and finally into a web service that can be accessed via web browsers by end-users. The analysis algorithms evolved from basic to state-of-the-art, currently enabling the automatic calculation of a population-specific ‘atlas’ where even complex brain regions are described in an anatomically correct way. Performance statistics show a clear improvement over the years, representing a mutual benefit from both a technology push and application pull.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133370876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Towards Trust in Desktop Grid Systems
Yvonne Bernard, Lukas Klejnowski, J. Hähner, C. Müller-Schloer
The Organic Computing (OC) Initiative deals with technical systems that consist of a large number of distributed and highly interconnected subsystems. In such systems, it is impossible for a designer to foresee all possible system configurations and to plan an appropriate system behaviour completely at design time. The aim is to endow such technical systems with the so-called self-X properties, such as self-organisation, self-configuration or self-healing. In such dynamic systems, trust is an important prerequisite for the future use of Organic Computing systems and algorithms in market-ready products. The OC-Trust project aims at introducing trust mechanisms to improve and assure the interoperability of subsystems. In this paper, we deal with aspects of organic systems regarding trustworthiness at the subsystem level (agents) in a desktop grid system. We develop an agent-based simulation of a desktop grid to show that the introduction of trust concepts improves the system's performance by speeding up the processes at the agent level. Specifically, we investigate a bottom-up, self-organised development of trust structures that creates coalition groups of agents which work more efficiently than standard algorithms. Here, an agent can determine individually to what extent it belongs to a Trusted Community.
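As a loose illustration of trust-based coalition formation, the sketch below lets each agent rate its peers and derives an agent's degree of membership in a Trusted Community from the average trust its peers place in it. The averaging rule, neutral prior and threshold are invented for illustration; they are not the OC-Trust project's trust model.

```python
# Minimal sketch of trust-driven coalition membership in a desktop grid.
# All rules and values below are illustrative assumptions.

class Agent:
    def __init__(self, name):
        self.name = name
        self.ratings = {}  # peer name -> list of interaction ratings in [0, 1]

    def rate(self, peer, score):
        self.ratings.setdefault(peer, []).append(score)

    def trust_in(self, peer):
        scores = self.ratings.get(peer, [])
        return sum(scores) / len(scores) if scores else 0.5  # neutral prior

def membership_degree(agent, community):
    """How strongly the community trusts this agent (mean of the members' trust)."""
    members = [m for m in community if m.name != agent.name]
    if not members:
        return 1.0
    return sum(m.trust_in(agent.name) for m in members) / len(members)

def trusted_community(agents, threshold=0.7):
    """Agents whose membership degree reaches the threshold form the coalition."""
    return [a for a in agents if membership_degree(a, agents) >= threshold]
```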
{"title":"Towards Trust in Desktop Grid Systems","authors":"Yvonne Bernard, Lukas Klejnowski, J. Hähner, C. Müller-Schloer","doi":"10.1109/CCGRID.2010.73","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.73","url":null,"abstract":"The Organic Computing (OC) Initiative deals with technical systems, that consist of a large number of distributed and highly interconnected subsystems. In such systems, it is impossible for a designer to foresee all possible system configurations and to plan an appropriate system behaviour completely at design time. The aim is to endow such technical systems with the so-called self-X properties, such as self-organisation, self-configuration or self-healing. In such dynamic systems, trust is an important prerequisite to enable the usage of Organic Computing systems and algorithms in market-ready products in the future. The OC-Trust project aims at introducing trust mechanisms to improve and assure the interoperability of subsystems. In this paper, we deal with aspects of organic systems regarding trustworthiness on the subsystem level (agents) in a desktop grid system. We develop an agent-based simulation of a desktop grid to show, that the introduction of trust concepts improves the system's performance, in such that they speed up the processes on the agent level. Specifically, we investigate a bottom-up self-organised development of trust structures that create coalition groups of agents that work more efficiently than standard algorithms. Here, an agent can determine individually to what extent it belongs to a Trusted Community.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133664961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 37
Decentralized Resource Availability Prediction for a Desktop Grid
Karthick Ramachandran, H. Lutfiyya, M. Perry
In a desktop grid model, the job (computational task) is submitted for execution on a resource only when the resource is idle. If the desktop machines are also used for other purposes, there is no guarantee that a job which has started to execute on a resource will complete its execution without any disruption from user activity (such as a keystroke or mouse move). This problem becomes more challenging in a Peer-to-Peer (P2P) model of a desktop grid, where there is no central server that decides to allocate a job to a particular resource. This paper describes a P2P desktop grid framework that utilizes resource availability prediction. We improve the functionality of the system by submitting jobs to machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.
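A minimal sketch of the underlying idea: each peer keeps a per-hour history of idle observations, estimates the probability of being idle in a given hour, and jobs are submitted to the peer with the highest predicted availability. The hour-of-day model and all names are assumptions, not the framework's actual predictor.

```python
# Illustrative availability predictor for choosing where to submit a job.
# The hour-of-day frequency model is an assumption made for this sketch.

from collections import defaultdict

class AvailabilityPredictor:
    def __init__(self):
        self.history = defaultdict(lambda: [0, 0])   # hour -> [idle count, total count]

    def observe(self, hour, idle):
        idle_count, total = self.history[hour]
        self.history[hour] = [idle_count + (1 if idle else 0), total + 1]

    def p_available(self, hour):
        idle_count, total = self.history[hour]
        return idle_count / total if total else 0.0

def choose_peer(predictors, hour):
    """predictors: dict peer_id -> AvailabilityPredictor (kept locally in a
    decentralized design). Returns the peer most likely to stay idle."""
    return max(predictors, key=lambda p: predictors[p].p_available(hour))
```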
{"title":"Decentralized Resource Availability Prediction for a Desktop Grid","authors":"Karthick Ramachandran, H. Lutfiyya, M. Perry","doi":"10.1109/CCGRID.2010.54","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.54","url":null,"abstract":"In a desktop grid model, the job (computational task) is submitted for execution in the resource only when the resource is idle. There is no guarantee that the job which has started to execute in a resource will complete its execution without any disruption from user activity (such as a keyboard stroke or mouse move) if the desktop machines are used for other purposes. This problem becomes more challenging in a Peer-to-Peer (P2P) model for a desktop grid where there is no central server that decides to allocate a job to a particular resource. This paper describes a P2P desktop grid framework that utilizes resource availability prediction. We improve the functionality of the system by submitting the jobs on machines that have a higher probability of being available at a given time. We benchmark our framework and provide an analysis of our results.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130040724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Framework for Efficient Indexing and Searching of Scientific Metadata
Chaitali Gupta, M. Govindaraju
A seamless and intuitive data reduction capability for the vast amount of scientific metadata generated by experiments is critical to ensure effective use of the data by domain-specific scientists. The portal environments and scientific gateways currently used by scientists provide search capability that is limited to the pre-defined pull-down menus and conditions set in the portal interface. Currently, data reduction can only be effectively achieved by scientists who have developed expertise in dealing with complex and disparate query languages. A common theme in our discussions with scientists is that data reduction capability, similar to web search in terms of ease of use, scalability, and freshness/accuracy of results, is a critical need that can greatly enhance the productivity and quality of scientific research. Most existing search tools are designed for exact string matching, but such matches are highly unlikely given the nature of metadata produced by instruments and a user's inability to recall exact numbers to search in very large datasets. This paper presents research to locate metadata of interest within a range of values. To meet this goal, we leverage the use of XML in metadata description for scientific datasets, specifically the NeXus datasets generated by the SNS scientists. We have designed a scalable indexing structure for processing data reduction queries. Web semantics and ontology-based methodologies are also employed to provide an elegant, intuitive, and powerful free-form query based data reduction interface to end users.
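To illustrate the kind of range-based data reduction query described here, the sketch below keeps numeric metadata fields in sorted order so that a "value between low and high" query returns matching records without scanning every entry. The field names, values and index layout are invented for illustration and do not reflect the paper's indexing structure.

```python
# Minimal range-searchable metadata index: numeric fields (e.g. extracted from
# NeXus/XML metadata) are kept sorted so range queries avoid full scans.
# Field names and records below are hypothetical.

import bisect
from collections import defaultdict

class RangeIndex:
    def __init__(self):
        self.index = defaultdict(list)   # field -> sorted list of (value, record_id)

    def add(self, record_id, metadata):
        for field, value in metadata.items():
            bisect.insort(self.index[field], (value, record_id))

    def query(self, field, low, high):
        """Return record ids whose field value lies in [low, high]."""
        entries = self.index[field]
        lo = bisect.bisect_left(entries, (low, ""))
        hi = bisect.bisect_right(entries, (high, chr(0x10FFFF)))
        return [rid for _, rid in entries[lo:hi]]

# Example: idx = RangeIndex()
#          idx.add("run_42", {"temperature": 285.0, "wavelength": 4.1})
#          idx.query("temperature", 270.0, 290.0)  ->  ["run_42"]
```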
{"title":"Framework for Efficient Indexing and Searching of Scientific Metadata","authors":"Chaitali Gupta, M. Govindaraju","doi":"10.1109/CCGRID.2010.120","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.120","url":null,"abstract":"A seamless and intuitive data reduction capability for the vast amount of scientific metadata generated by experiments is critical to ensure effective use of the data by domain specific scientists. The portal environments and scientific gateways currently used by scientists provide search capability that is limited to the pre-defined pull-down menus and conditions set in the portal interface. Currently, data reduction can only be effectively achieved by scientists who have developed expertise in dealing with complex and disparate query languages. A common theme in our discussions with scientists is that data reduction capability, similar to web search in terms of ease-of-use, scalability, and freshness/accuracy of results, is a critical need that can greatly enhance the productivity and quality of scientific research. Most existing search tools are designed for exact string matching, but such matches are highly unlikely given the nature of metadata produced by instruments and a user’s inability to recall exact numbers to search in very large datasets. This paper presents research to locate metadata of interest within a range of values. To meet this goal, we leverage the use of XML in metadata description for scientific datasets, specifically the NeXus datasets generated by the SNS scientists. We have designed a scalable indexing structure for processing data reduction queries. Web semantics and ontology based methodologies are also employed to provide an elegant, intuitive, and powerful free-form query based data reduction interface to end users.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132042546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Representing Eager Evaluation in a Demand Driven Model of Streams on Cloud Infrastructure
P. Martinaitis, A. Wendelborn
Previously, we developed our StreamComponents framework, which uses distributed components and web services to facilitate control, reconfiguration and deployment of streams on both local clusters and remote cloud infrastructure. Our stream evaluation semantics are fundamentally demand driven, a conservative view that ensures no unnecessary computation, supports flexible structures such as cyclic networks and infinite streams, and facilitates resource management. In this paper, we focus on the evaluation semantics of our stream model and explore circumstances under which more eager evaluation is desirable, whilst retaining the fundamental semantics. We introduce the Indirected Asynchronous Method pattern (IAM), which makes novel use of futures and auto-continuations to facilitate fully asynchronous demand propagation, leading to more eager evaluation of the streams. We present an evaluation of the model on both cluster and cloud infrastructure, showing that very useful amounts of pipelining parallelism can be achieved.
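As a loose sketch of adding eagerness to a demand-driven stream with futures, the code below satisfies each demand from a future and immediately schedules the next element in the background, so production overlaps with consumption. This illustrates the general idea only; it is not the paper's Indirected Asynchronous Method (IAM) pattern or the StreamComponents API.

```python
# Sketch: demand-driven pull interface with one-element eager prefetch via futures.
# The class name, prefetch depth and producer protocol are assumptions.

from concurrent.futures import ThreadPoolExecutor

class EagerStream:
    def __init__(self, producer, executor=None):
        self.producer = producer                        # producer(i) -> element i
        self.executor = executor or ThreadPoolExecutor(max_workers=2)
        self.next_index = 0
        self.prefetched = None                          # future for the next element

    def pull(self):
        """Each call answers the current demand while the following element is
        already being computed asynchronously (a simple form of eagerness)."""
        if self.prefetched is None:                     # first demand: start now
            self.prefetched = self.executor.submit(self.producer, self.next_index)
        current = self.prefetched
        self.next_index += 1
        self.prefetched = self.executor.submit(self.producer, self.next_index)
        return current.result()                         # blocks only if not ready yet

# Example: s = EagerStream(lambda i: i * i); [s.pull() for _ in range(4)] -> [0, 1, 4, 9]
```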
{"title":"Representing Eager Evaluation in a Demand Driven Model of Streams on Cloud Infrastructure","authors":"P. Martinaitis, A. Wendelborn","doi":"10.1109/CCGRID.2010.88","DOIUrl":"https://doi.org/10.1109/CCGRID.2010.88","url":null,"abstract":"Previously, we developed our StreamComponents framework which uses distributed components and web services to facilitate control, reconfiguration and deployment of streams on both local clusters, and remote cloud infrastructure. Our stream evaluation semantics are fundamentally demand driven, a conservative view that ensures no unnecessary computation, supports flexible structures such as cyclic networks and infinite streams, and facilitates resource management. Abstract In this paper, we focus on the evaluation semantics of our stream model, and explore circumstances under which more eager evaluation is desirable, whilst retaining the fundamental semantics. We introduce the Indirected Asynchronous Method pattern (IAM), which makes novel use of futures and auto-continuations, to facilitate fully asynchronous demand propagation leading to more eager evaluation of the streams. We present an evaluation of the model on both cluster and cloud infrastructure showing that very useful amounts of pipelining parallelism can be achieved.","PeriodicalId":444485,"journal":{"name":"2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132108300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0