Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470798
AuctionNet: Market oriented task scheduling in heterogeneous distributed environments
Han Zhao, Xiaolin Li
We propose a suite of market-oriented task scheduling algorithms to build an AuctionNet for heterogeneous distributed environments. In such environments, for example peer-to-peer systems and desktop grids/clouds, computing nodes are autonomous and owned by different organizations. Given this diverse heterogeneity and dynamism in systems, applications, and local policies, efficient and fair task scheduling becomes a challenging issue. To cope with this complexity in a distributed and noncooperative environment, we propose market-oriented incentive mechanisms that regulate task scheduling in a distributed manner. Further, to accommodate multiple objectives and criteria, we adopt a combined approach that leverages the advantages of both hypergraph theory and incentive mechanisms. We first formulate a general framework for market-oriented task scheduling in distributed systems. We then present two algorithms for task-bundle scheduling. Preliminary results demonstrate the satisfactory performance of the proposed algorithms, and the remaining work to complete the PhD dissertation is then presented. The proposed research carries significant intellectual merit and potential broader impacts in the following aspects. (1) We propose the notion of a task-bundle for the first time in the literature. Product bundling has long been a common marketing strategy in daily life, and in emerging commercial and desktop clouds the task-bundle could be a similarly useful concept for computing and storage markets. (2) We propose efficient distributed mechanisms that are well suited to such distributed systems; a novel algorithm combining hypergraphs and incentive mechanisms achieves multi-objective optimization. (3) We conduct a rigorous analytical study and prove that our algorithms ensure efficiency and fairness while maximizing social welfare. (4) Overall, this proposal lays a solid foundation and sheds light on future research and real-world applications in the broad area of task scheduling in distributed systems.
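To make the auction idea concrete, here is a minimal, hypothetical sketch of allocating one task-bundle to autonomous nodes via a sealed-bid reverse auction with a Vickrey-style payment rule; the class names, the second-price rule, and the sample bids are illustrative assumptions and do not describe the mechanism actually proposed in the paper.

```python
# Hypothetical sketch: sealed-bid reverse auction for a single task-bundle.
# The second-price payment rule is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class Bid:
    node_id: str
    cost: float  # price the node asks to execute the whole bundle

def allocate_bundle(bids):
    """Award the bundle to the cheapest bidder; charge the second-lowest
    asked cost (Vickrey-style), which makes truthful bidding a dominant strategy."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders for a second-price auction")
    ordered = sorted(bids, key=lambda b: b.cost)
    winner, runner_up = ordered[0], ordered[1]
    return winner.node_id, runner_up.cost  # (winner, payment)

if __name__ == "__main__":
    bids = [Bid("peer-A", 12.0), Bid("peer-B", 9.5), Bid("peer-C", 14.2)]
    print(allocate_bundle(bids))  # ('peer-B', 12.0)
```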
{"title":"AuctionNet: Market oriented task scheduling in heterogeneous distributed environments","authors":"Han Zhao, Xiaolin Li","doi":"10.1109/IPDPSW.2010.5470798","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470798","url":null,"abstract":"We propose a suite of market-oriented task scheduling algorithms to build an AuctionNet for heterogeneous distributed environments. In heterogeneous distributed environments, computing nodes are autonomous and owned by different organizations, for example peer-to-peer systems, desktop grids/clouds. To address such diverse heterogeneity and dynamism in systems, applications, and local policies, efficient and fair task scheduling becomes a challenging issue. To cope with such complexity in a distributed and noncooperative environment, we propose to use market-oriented incentive mechanisms to regulate task scheduling in a distributed manner. Further, to accommodate multiple objectives and criteria, we adopt a combined approach leveraging the advantage of both hypergraph theory and incentive mechanisms. We first formulate a general framework of market-oriented task scheduling in distributed systems. We then present two algorithms for task-bundle scheduling. Preliminary results demonstrate the satisfactory performance of our proposed algorithms. The remaining work to complete the PhD dissertation is then presented. The proposed research carries significant intellectual merits and potential broader impacts in the following aspects. (1) We propose the notion of task-bundle for the first time in the literature. Product-bundle has been a common marketing strategy in our daily life for a long time. In the emerging commercial clouds and desktop clouds, task-bundle could be a useful concept for computing and storage markets. (2) We propose efficient distributed mechanisms that are very suitable for such distributed systems. A novel algorithm combining hypergraph and incentive mechanisms achieves multi-objective optimization. (3) We conduct rigorous analytical study and prove that our algorithms ensure efficiency and fairness and in the meantime maximize social welfare. (4) Overall, this proposal lays a solid foundation and sheds light on future research and realworld applications in the broad area of task scheduling in distributed systems.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116441705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470690
TiNy threads on BlueGene/P: Exploring many-core parallelisms beyond The traditional OS
Handong Ye, R. Pavel, A. Landwehr, G. Gao
Operating systems (OSes) have long been considered a cornerstone of the modern computer system, and the conventional operating system model targets computers designed around the sequential execution model. However, with the rapid progress of multi-core/many-core technologies, we argue that OSes must be adapted to the underlying hardware platform to fully exploit parallelism. To illustrate this, our paper reports a study on how to perform such an adaptation for the IBM BlueGene/P multi-core system.
{"title":"TiNy threads on BlueGene/P: Exploring many-core parallelisms beyond The traditional OS","authors":"Handong Ye, R. Pavel, A. Landwehr, G. Gao","doi":"10.1109/IPDPSW.2010.5470690","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470690","url":null,"abstract":"Operating Systems (OSs) have been considered as a cornerstone of the modern computer system, and the conventional operating system model targets computers designed around the sequential execution model. However, with the rapid progress of the multi-core/manycore technologies, we argue that OSes must be adapted to the underlying hardware platform to fully exploit parallelism. To illustrate this, our paper reports a study on how to perform such an adaptation for the IBM BlueGene/P multi-core system.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123525404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470809
Coping with uncertainty in scheduling problems
Louis-Claude Canon
Large-scale distributed systems such as Grids constitute computational environments that are essential to academic and industrial needs. However, they exhibit uncertain behaviors due to their continually increasing scale. We propose to revisit traditional scheduling problems in these environments by incorporating uncertainty into the models.
{"title":"Coping with uncertainty in scheduling problems","authors":"Louis-Claude Canon","doi":"10.1109/IPDPSW.2010.5470809","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470809","url":null,"abstract":"Large-scale distributed systems such as Grids constitute computational environments that are essential to academic and industry needs. However, they present uncertain behaviors due to their scales that increase continually. We propose to revisit traditional scheduling problematics in these environments by considering uncertainty in the models.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121970039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470934
Analysis of random time-based switching for file sharing in peer-to-peer networks
Keqin Li
The expected file download time of the randomized time-based switching algorithm for peer selection and file downloading in a peer-to-peer (P2P) network is still unknown. The main contribution of this paper is to analyze the expected file download time of the time-based switching algorithm for file sharing in P2P networks when the service capacity of a source peer is totally correlated over time, namely, when the service capacities of a source peer in different time slots are a fixed value. A recurrence relation is developed to characterize the expected file download time of the time-based switching algorithm. It is proved that, for two or more heterogeneous source peers and sufficiently large file size, the expected file download time of the time-based switching algorithm is less than, and can be arbitrarily less than, the expected download time of the chunk-based switching algorithm and the expected download time of the permanent connection algorithm. It is also shown that the expected file download time of the time-based switching algorithm lies between the file size divided by the arithmetic mean of the service capacities and the file size divided by their harmonic mean. Numerical examples and data are presented to demonstrate our analytical results.
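As an illustrative restatement of the stated range (with $S$ denoting the file size and $c_1,\dots,c_n$ the fixed service capacities of the $n$ source peers — notation introduced here, not taken from the abstract), the bound can be written as

\[
\frac{S}{\tfrac{1}{n}\sum_{i=1}^{n} c_i}
\;\le\; \mathbb{E}[T] \;\le\;
\frac{S}{\,n \big/ \sum_{i=1}^{n} c_i^{-1}\,} ,
\]

where the left endpoint uses the arithmetic mean and the right endpoint the harmonic mean of the capacities. The two endpoints coincide when all capacities are equal, since the two means then agree.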
{"title":"Analysis of random time-based switching for file sharing in peer-to-peer networks","authors":"Keqin Li","doi":"10.1109/IPDPSW.2010.5470934","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470934","url":null,"abstract":"The expected file download time of the randomized time-based switching algorithm for peer selection and file downloading in a peer-to-peer (P2P) network is still unknown. The main contribution of this paper is to analyze the expected file download time of the time-based switching algorithm for file sharing in P2P networks when the service capacity of a source peer is totally correlated over time, namely, the service capacities of a source peer in different time slots are a fixed value. A recurrence relation is developed to characterize the expected file download time of the time-based switching algorithm. Is is proved that for two or more heterogeneous source peers and sufficiently large file size, the expected file download time of the time-based switching algorithm is less than and can be arbitrarily less than the expected download time of the chunk-based switching algorithm and the expected download time of the permanent connection algorithm. It is shown that the expected file download time of the time-based switching algorithm is in the range of the file size divided by the harmonic mean of service capacities and the file size divided by the arithmetic mean of service capacities. Numerical examples and data are presented to demonstrate our analytical results.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123937502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470785
Adapting to NAT timeout values in P2P overlay networks
Richard Price, P. Tiňo
Nodes within existing P2P networks typically exchange periodic keep-alive messages in order to maintain network connections between neighbours. Keep-alive messages serve a dual purpose: they are used to detect node failures and to prevent idle connections from being expired by NAT devices. However, despite being widely used, the interval between messages is typically fixed below the timeout value of most NAT devices based on crude rules of thumb. Furthermore, although many studies have addressed NAT traversal and others seek to improve failure detection in P2P overlay networks, the limitations of NAT devices have received little research attention. This paper explores algorithms that allow nodes to adapt to the timeout values of individual NAT devices and investigates the resulting trade-offs.
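One simple way a node could adapt its keep-alive interval is to probe for the largest idle period the NAT tolerates. The sketch below is a hedged illustration of that idea only, not the paper's algorithm; `probe_connection` is a hypothetical callback that idles a connection for a given number of seconds and reports whether the NAT mapping survived.

```python
# Illustrative sketch (not the paper's algorithm): binary search for the
# largest keep-alive interval that still keeps a NAT binding alive.
def estimate_nat_timeout(probe_connection, low=10.0, high=600.0, tolerance=5.0):
    """Return a keep-alive interval just below the NAT's observed idle timeout."""
    best = low
    while high - low > tolerance:
        mid = (low + high) / 2.0
        if probe_connection(mid):   # mapping still alive after `mid` seconds idle
            best = mid
            low = mid               # try a longer idle period
        else:
            high = mid              # timed out; shorten the idle period
    return best

if __name__ == "__main__":
    # Simulated NAT whose real idle timeout is 120 s.
    simulated_nat = lambda interval: interval < 120.0
    print(round(estimate_nat_timeout(simulated_nat), 1))
```

In practice each probe costs real waiting time, so such a search would trade measurement overhead against the bandwidth saved by sending keep-alives less often — precisely the trade-off the paper investigates.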
{"title":"Adapting to NAT timeout values in P2P overlay networks","authors":"Richard Price, P. Tiňo","doi":"10.1109/IPDPSW.2010.5470785","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470785","url":null,"abstract":"Nodes within existing P2P networks typically exchange periodic keep-alive messages in order to maintain network connections between neighbours. Keep-alive messages serve a dual purpose, they're used to detect node failures and to prevent idle connections from being expired by NAT devices. However despite being widely used, the interval between messages are typically fixed below the timeout value of most NAT devices based upon crude rules of thumb. Furthermore, although many studies have been conducted to traverse NAT devices and other studies seek to improve failure detection in P2P overlay networks; the limitations of NAT devices have received little research attention. This paper explores algorithms which allow nodes to adapt to the timeout values of individual NAT devices and investigates the resulting trade-offs.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"2019 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124046905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470930
High performance Peer-to-Peer distributed computing with application to obstacle problem
Nguyen The Tung, D. E. Baz, P. Spitéri, Guillaume Jourjon, M. Chau
This paper deals with high performance Peer-to-Peer computing applications. We concentrate on the solution of large-scale numerical simulation problems via distributed iterative methods. We present the current version of an environment that allows direct communication between peers. This environment is based on a self-adaptive communication protocol that configures itself automatically and dynamically according to application requirements, such as the computation scheme, and elements of context, such as topology, by choosing the most appropriate communication mode between peers. A first series of computational experiments on the obstacle problem is presented and analyzed.
{"title":"High performance Peer-to-Peer distributed computing with application to obstacle problem","authors":"Nguyen The Tung, D. E. Baz, P. Spitéri, Guillaume Jourjon, M. Chau","doi":"10.1109/IPDPSW.2010.5470930","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470930","url":null,"abstract":"This paper deals with high performance Peer-to-Peer computing applications. We concentrate on the solution of large scale numerical simulation problems via distributed iterative methods. We present the current version of an environment that allows direct communication between peers. This environment is based on a self-adaptive communication protocol. The protocol configures itself automatically and dynamically in function of application requirements like scheme of computation and elements of context like topology by choosing the most appropriate communication mode between peers. A first series of computational experiments is presented and analyzed for the obstacle problem.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125783793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470804
Towards dynamic reconfigurable load-balancing for hybrid desktop platforms
A. Binotto, C. Pereira, D. Fellner
High-performance platforms are required by applications that perform massive calculations. Today, desktop accelerators (such as GPUs) form a powerful heterogeneous platform in conjunction with multi-core CPUs. To improve application performance on these hybrid platforms, load balancing plays an important role in distributing the workload. However, this scheduling problem is challenging because the cost of a task on a Processing Unit (PU) is non-deterministic and depends on parameters that cannot be known a priori, such as input data, online creation of tasks, and changing scenarios. Therefore, self-adaptive computing is a promising paradigm, as it provides the flexibility to exploit computational resources and improve performance across different execution scenarios. This paper presents ongoing PhD research focused on a dynamic and reconfigurable scheduling strategy based on timing profiling for desktop accelerators. Preliminary results analyze the performance of solvers for Systems of Linear Equations (SLEs) on a hybrid CPU and multi-GPU platform applied to a Computational Fluid Dynamics (CFD) application. The choice of the best solver, as well as its scheduling, must be made dynamically, considering online parameters, in order to achieve better application performance.
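To illustrate profile-guided PU selection in its simplest form, the following is a minimal sketch under assumptions of our own (the timing table, function names, and the "fastest measured PU wins" policy are invented for illustration and are not the paper's scheduler).

```python
# Minimal sketch: pick the processing unit (PU) with the best measured
# runtime for a given solver; unprofiled PUs are tried first so every
# device gets at least one measurement. Illustrative only.
import time

profile = {}  # (solver_name, pu_name) -> last measured runtime in seconds

def run_and_profile(solver_name, pu_name, run):
    """Execute `run` (a zero-argument callable) and record its wall-clock time."""
    start = time.perf_counter()
    result = run()
    profile[(solver_name, pu_name)] = time.perf_counter() - start
    return result

def pick_pu(solver_name, pus):
    """Choose the PU with the smallest recorded runtime for this solver."""
    unprofiled = [p for p in pus if (solver_name, p) not in profile]
    if unprofiled:
        return unprofiled[0]
    return min(pus, key=lambda p: profile[(solver_name, p)])

if __name__ == "__main__":
    for pu in ("cpu", "gpu0"):
        run_and_profile("jacobi", pu, lambda: sum(range(100_000)))
    print(pick_pu("jacobi", ["cpu", "gpu0"]))
```

A real scheduler would also weigh online parameters such as input size and current device load, as the abstract notes, rather than relying on a single past measurement.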
{"title":"Towards dynamic reconfigurable load-balancing for hybrid desktop platforms","authors":"A. Binotto, C. Pereira, D. Fellner","doi":"10.1109/IPDPSW.2010.5470804","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470804","url":null,"abstract":"High-performance platforms are required by applications that use massive calculations. Actually, desktop accelerators (like the GPUs) form a powerful heterogeneous platform in conjunction with multi-core CPUs. To improve application performance on these hybrid platforms, load-balancing plays an important role to distribute workload. However, such scheduling problem faces challenges since the cost of a task at a Processing Unit (PU) is non-deterministic and depends on parameters that cannot be known a priori, like input data, online creation of tasks, scenario changing, etc. Therefore, self-adaptive computing is a potential paradigm as it can provide flexibility to explore computational resources and improve performance on different execution scenarios. This paper presents an ongoing PhD research focused on a dynamic and reconfigurable scheduling strategy based on timing profiling for desktop accelerators. Preliminary results analyze the performance of solvers for SLEs (Systems of Linear Equations) over a hybrid CPU and multi-GPU platform applied to a CFD (Computational Fluid Dynamics) application. The decision of choosing the best solver as well as its scheduling must be performed dynamically considering online parameters in order to achieve a better application performance.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125832448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470812
Analyzing the trade-off between multiple memory controllers and memory channels on multi-core processor performance
J. Sancho, M. Lang, D. Kerbyson
The increasing core count on current and future processors is posing critical challenges to the memory subsystem, which must efficiently handle concurrent memory requests. The current trend is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and using multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, across a wide range of production applications show that there are diminishing returns when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements, up to 28%, can be achieved in this scheme when using two memory controllers, each with one channel, compared with one controller with two memory channels.
{"title":"Analyzing the trade-off between multiple memory controllers and memory channels on multi-core processor performance","authors":"J. Sancho, M. Lang, D. Kerbyson","doi":"10.1109/IPDPSW.2010.5470812","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470812","url":null,"abstract":"The increasing core-count on current and future processors is posing critical challenges to the memory subsystem to efficiently handle concurrent memory requests. The current trend is to increase the number of memory channels available to the processor's memory controller. In this paper we investigate the effectiveness of this approach on the performance of parallel scientific applications. Specifically, we explore the trade-off between employing multiple memory channels per memory controller and the use of multiple memory controllers. Experiments conducted on two current state-of-the-art multicore processors, a 6-core AMD Istanbul and a 4-core Intel Nehalem-EP, for a wide range of production applications shows that there is a diminishing return when increasing the number of memory channels per memory controller. In addition, we show that this performance degradation can be efficiently addressed by increasing the ratio of memory controllers to channels while keeping the number of memory channels constant. Significant performance improvements can be achieved in this scheme, up to 28%, in the case of using two memory controllers each with one channel compared with one controller with two memory channels.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125929069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470728
Stack protection unit as a step towards securing MPSoCs
S. Lukovic, P. Pezzino, Leandro Fiorin
Reconfigurable technologies are becoming popular as an instrument not only for verification and prototyping but also for commercial implementation of Multi-Processor System-on-Chip (MPSoC) architectures. These systems, in particular Network-on-Chip (NoC) based ones, have emerged as a design strategy to cope with the increased requirements and complexity of modern applications. However, the increasing heterogeneity, coupled with the possibility of reconfiguration, makes security one of the major concerns in MPSoC design. In this work, we show a solution for FPGA-based designs against one of the most widespread types of attacks: code injection. Our response to this challenge takes the form of a Stack Protection Unit (SPU) embedded into processing cores. The MicroBlaze soft-core processor serves as a case study for verification of the proposed solution in FPGA technology.
{"title":"Stack protection unit as a step towards securing MPSoCs","authors":"S. Lukovic, P. Pezzino, Leandro Fiorin","doi":"10.1109/IPDPSW.2010.5470728","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470728","url":null,"abstract":"Reconfigurable technologies are getting popular as an instrument not only for verification and prototyping but also for commercial implementation of Multi-Processor System-on-Chip (MPSoC) architectures. These systems, in particular Network-on-Chip (NoC) based ones, have emerged as a design strategy to cope with increased requirements and complexity of modern applications. However, the increasing heterogeneity, coupled with possibility of reconfiguration, makes security become one of major concerns in MPSoC design. In this work, we show a solution for FPGA based designs against one of the most widespread types of attacks - code injection. Our response to tackle this challenge is given in form of Stack Protection Unit (SPU) embedded into processing cores. MicroBlaze soft-core processor serves as a case study for verification of the proposed solution in FPGA technology.","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124681035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2010-04-19 | DOI: 10.1109/IPDPSW.2010.5470914
An interoperable & optimal data grid solution for heterogeneous and SOA based Grid- GARUDA
Payal Saluja, Prahlada Rao B.B., V. Shashidhar, A. Paventhan, Neetu Sharma
Storage plays an important role in satisfying the requirements of data-intensive applications in a Grid computing environment. Current scientific applications perform complex computational analyses and consume/produce hundreds of terabytes of data. In this paper, the authors survey available data grid solutions, viz., Storage Resource Broker (SRB), Grid File System (GFS), Storage Resource Manager (SRM), iRODS, and WS-DAI, and present their operational experience with the Service Oriented Architecture (SOA) based GARUDA grid. SOA introduces further challenges for the storage system in achieving availability, security, scalability, and performance. Based on the survey, the authors propose the GARUDA Storage Resource Manager (GSRM), which adheres to the SRM specifications. GSRM is a disk-based SRM implementation built on the Disk Pool Manager (DPM) architecture. It addresses aspects such as virtualization, security, latency, performance, and data availability. We also discuss how the GSRM architecture can leverage CDAC's Parallel File System (C-PFS).
{"title":"An interoperable & optimal data grid solution for heterogeneous and SOA based Grid- GARUDA","authors":"Payal Saluja, Prahlada Rao B.B., V. Shashidhar, A. Paventhan, Neetu Sharma","doi":"10.1109/IPDPSW.2010.5470914","DOIUrl":"https://doi.org/10.1109/IPDPSW.2010.5470914","url":null,"abstract":"Storage plays an important role in sufficing the requirements of data intensive applications in a Grid computing environment. Current Scientific applications perform complex computational analysis, and consume/produce hundreds of terabytes of data. The authors in this paper have surveyed available data grid solutions, viz., Storage Resource Broker (SRB), Grid File System (GFS), Storage Resource Manager (SRM), iRODS and WS-DAI and presented their operational experiences in Service Oriented Architecture (SOA) based GARUDA grid. SOA introduces more challenges to achieve: availability, security, scalability and performance to the storage system. Based on the survey, the authors proposed GARUDA-Storage Resource Manager (GSRM) that adheres to SRM specifications. GSRM is a disk based SRM implementation based on DPM (Disk Pool manager) architecture. It addresses the various aspects like virtualization, security, latency, performance, and data availability. We discussed how GSRM architecture can leverage CDAC's Parallel File System (C-PFS).","PeriodicalId":329280,"journal":{"name":"2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129847220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}