Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210020
O. Sievert, H. Casanova
Despite the enormous amount of research and development work in the area of parallel computing, it is a common observation that performance and ease of use are rarely achieved simultaneously. We believe that ease of use is critical for many end users, and thus seek performance-enhancing techniques that can be easily retrofitted to existing parallel applications. In a previous paper we presented MPI (Message Passing Interface) process swapping, a simple add-on to the MPI programming environment that can improve performance in shared computing environments. MPI process swapping requires as few as three lines of source code change to an existing application. In this paper we explore a question that we had left open in our previous work: based on which policies should processes be swapped for best performance? Our results show that, with adequate swapping policies, MPI process swapping can provide substantial performance benefits with very limited implementation effort.
Title: Policies for swapping MPI processes
Venue: Proceedings of the 12th IEEE International Symposium on High Performance Distributed Computing, 2003
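The swapping decision described above can be sketched as a toy policy: move an MPI process off a slow processor when an idle spare is measurably faster. All names and the 1.2x threshold below are illustrative assumptions, not the authors' API or their evaluated policies.

```python
def plan_swaps(active, spare, gain=1.2):
    """active/spare: {processor_name: measured_speed}.
    Return (slow_active, fast_spare) pairs worth swapping."""
    swaps = []
    # Consider the fastest spares first, the slowest active processors first.
    free = sorted(spare.items(), key=lambda kv: kv[1], reverse=True)
    for proc, speed in sorted(active.items(), key=lambda kv: kv[1]):
        if not free:
            break
        cand, cand_speed = free[0]
        if cand_speed >= gain * speed:   # only swap for a clear win
            swaps.append((proc, cand))
            free.pop(0)
    return swaps
```

A real policy would also weigh the cost of the swap itself (state transfer, synchronization) against the expected gain, which is exactly the trade-off the paper's experiments explore.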
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210027
Xuxian Jiang, Dongyan Xu
The grid is realizing the vision of providing computation as a utility: computational jobs can be scheduled on demand at grid hosts based on available computational capacity. In this project, we study another emerging use of the grid utility: the hosting of application services. Unlike a computational job, an application service such as an e-Laboratory or an online business has a longer lifetime and performs multiple jobs requested by its clients. A service hosting utility platform (HUP) is formed by a set of hosts in the grid, and multiple application services will be hosted on the HUP. SODA is a service-on-demand architecture that enables on-demand creation of application services on a HUP. With SODA, an application service will be created in the form of a set of virtual service nodes; each node is a virtual machine which is physically a 'slice' of a real host in the HUP. SODA involves both OS and middleware techniques, and has the following salient capabilities: (1) on-demand service priming: the image of an application service, as well as the OS on which it runs, will be created on demand and bootstrapped automatically; (2) better service isolation: services sharing the same HUP host are isolated with respect to administration, faults, intrusion, and resources; (3) integrated service load management: for each service, a service switch will be created to direct client requests to appropriate virtual service nodes. Moreover, the application service provider can replace the default request switching policy with a service-specific policy.
Title: SODA: a service-on-demand architecture for application service hosting utility platforms
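Capability (3) above, the per-service switch with a replaceable routing policy, can be sketched as follows. This is a minimal stand-in, assuming round-robin as the default policy; SODA's actual interfaces are not shown in the abstract.

```python
class ServiceSwitch:
    """Toy per-service switch: forwards each client request to a
    virtual service node chosen by a replaceable policy."""

    def __init__(self, nodes, policy=None):
        self.nodes = list(nodes)
        self._rr = 0
        # Provider may install a service-specific policy; default is round-robin.
        self.policy = policy or self._round_robin

    def _round_robin(self, request):
        node = self.nodes[self._rr % len(self.nodes)]
        self._rr += 1
        return node

    def dispatch(self, request):
        return self.policy(request)
```

Replacing the default is then one assignment, e.g. a policy that pins premium clients to a dedicated node, which mirrors the abstract's point that request switching is a pluggable component rather than baked into the service.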
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210032
C. Schmidt, M. Parashar
The ability to efficiently discover information using partial knowledge (for example keywords, attributes or ranges) is important in large, decentralized, resource-sharing distributed environments such as computational grids and peer-to-peer (P2P) storage and retrieval systems. This paper presents a P2P information discovery system that supports flexible queries using partial keywords and wildcards, and range queries. It guarantees that all existing data elements that match a query are found with bounded costs in terms of the number of messages and the number of peers involved. The key innovation is a dimension-reducing indexing scheme that effectively maps the multidimensional information space to physical peers. The design, implementation and experimental evaluation of the system are presented.
Title: Flexible information discovery in decentralized distributed systems
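As an illustration of dimension-reducing indexing, the sketch below uses Morton (Z-order) bit interleaving, one common space-filling-curve technique, to map 2-D keys onto a 1-D index that is then partitioned across peers. The paper's actual curve and partitioning scheme may differ; this only shows the shape of the idea.

```python
def morton2(x, y, bits=8):
    """Interleave the bits of (x, y) into a single 1-D index."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
    return code

def owner(code, n_peers, bits=8):
    """Assign each contiguous segment of the 1-D curve to one peer."""
    return code * n_peers // (1 << (2 * bits))
```

Because nearby points in the 2-D space tend to get nearby 1-D codes, a range query touches a bounded set of contiguous curve segments, hence a bounded set of peers, which is the cost guarantee the abstract claims.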
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210018
Rajesh Raman, M. Livny, M. Solomon
Dynamic, heterogeneous and distributively owned resource environments present unique challenges to the problems of resource representation, allocation and management. Conventional resource management methods that rely on static models of resource allocation policy and behavior fail to address these challenges. We previously argued that Matchmaking provides an elegant and robust solution to resource management in such dynamic and federated environments. However, Matchmaking is limited by its purely bilateral formalism of matching a single customer with a single resource, precluding more advanced resource management services such as co-allocation. In this paper, we present Gangmatching, a multilateral extension to the Matchmaking model, and discuss the Gangmatching model and its associated implementation and performance issues in the context of a real-world license management co-allocation problem.
Title: Policy driven heterogeneous resource co-allocation with Gangmatching
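The multilateral idea can be sketched in miniature: a request is satisfied only when one advertisement of each required kind (say, a machine and a license) matches its constraint. ClassAd expressions are replaced here by plain Python predicates; this is an assumed simplification, not Gangmatching's actual matching algorithm.

```python
def gang_match(request, ads):
    """request: {kind: predicate}; ads: list of (kind, attrs) dicts.
    Return one {kind: attrs} binding covering every kind, or None."""
    gang = {}
    for kind, pred in request.items():
        for ad_kind, attrs in ads:
            # An ad satisfies a slot if its kind matches, it is not
            # already claimed, and the request's constraint holds.
            if ad_kind == kind and attrs not in gang.values() and pred(attrs):
                gang[kind] = attrs
                break
        else:
            return None   # some required kind could not be matched: no gang
    return gang
```

Bilateral Matchmaking is the special case of a single kind; the co-allocation problem the paper targets is exactly the all-or-nothing semantics of the loop above.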
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210014
A. Takefusa, O. Tatebe, S. Matsuoka, Y. Morita
Data Grid is a Grid for ubiquitous access and analysis of large-scale data. Because Data Grid is in the early stages of development, the performance of its petabyte-scale models in a realistic data processing setting has not been well investigated. By enhancing our Bricks Grid simulator to accommodate Data Grid scenarios, we investigate and compare the performance of different Data Grid models. These are categorized mainly as either central or tier models; they employ various scheduling and replication strategies under realistic assumptions of job processing for CERN LHC experiments on the Grid Datafarm system. Our results show that the central model is efficient, but that the tier model, with its greater resources and its speculative class of background replication policies, is quite effective and achieves higher performance, even though each tier is smaller than the central model.
Title: Performance analysis of scheduling and replication algorithms on Grid Datafarm architecture for high-energy physics applications
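The interaction of scheduling and replication that the paper simulates can be reduced to a toy cost model: a job placed at a site pays its queue delay plus an input transfer that vanishes where a replica already exists. The numbers and the single-bandwidth assumption are illustrative, not Bricks' model.

```python
def schedule(job_input_gb, sites, replicas, bw_gb_s=1.0):
    """sites: list of (name, queue_delay_s); replicas: set of site names
    holding the job's input. Pick the site with the lowest estimated
    completion time: queue delay plus input transfer time."""
    def eta(site):
        name, queue_s = site
        xfer = 0.0 if name in replicas else job_input_gb / bw_gb_s
        return queue_s + xfer
    return min(sites, key=eta)[0]
```

Even this crude model shows the tier-model effect in the abstract: a smaller tier site with a local replica can beat a larger but replica-less central site once the input is big enough.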
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210015
Von Welch, F. Siebenlist, Ian T Foster, J. Bresnahan, K. Czajkowski, Jarek Gawor, C. Kesselman, Sam Meder, L. Pearlman, S. Tuecke
Grid computing is concerned with the sharing and coordinated use of diverse resources in distributed "virtual organizations." The dynamic and multi-institutional nature of these environments introduces challenging security issues that demand new technical approaches. In particular, one must deal with diverse local mechanisms, support dynamic creation of services, and enable dynamic creation of trust domains. We describe how these issues are addressed in two generations of the Globus Toolkit®. First, we review the Globus Toolkit version 2 (GT2) approach; then we describe new approaches developed to support the Globus Toolkit version 3 (GT3) implementation of the Open Grid Services Architecture, an initiative that is recasting Grid concepts within a service-oriented framework based on Web services. GT3's security implementation uses Web services security mechanisms for credential exchange and other purposes, and introduces a tight least-privilege model that avoids the need for any privileged network service.
Title: Security for Grid services
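One concrete instance of "diverse local mechanisms" is translating a global certificate identity into a site-local account, as a gridmap-style table does in the Globus Toolkit. The sketch below is a minimal stand-in with invented entries; real gridmap files live on disk and are consulted after the certificate chain has been verified.

```python
# Invented example entries: subject DN -> local Unix account.
GRIDMAP = {
    "/O=Grid/CN=Alice Researcher": "alice",
    "/O=Grid/CN=Bob Operator": "bob",
}

def local_account(subject_dn):
    """Map an authenticated Grid identity to a local account,
    refusing access when no mapping exists."""
    try:
        return GRIDMAP[subject_dn]
    except KeyError:
        raise PermissionError("no local mapping for %s" % subject_dn)
```

The per-site nature of such tables is exactly why the paper argues for dynamically created trust domains rather than a single global mapping.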
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210019
J. Chase, David E. Irwin, Laura E. Grit, Justin D. Moore, Sara Sprenkle
This paper presents new mechanisms for dynamic resource management in a cluster manager called Cluster-on-Demand (COD). COD allocates servers from a common pool to multiple virtual clusters (vclusters), with independently configured software environments, name spaces, user access controls, and network storage volumes. We present experiments using the popular Sun GridEngine batch scheduler to demonstrate that dynamic virtual clusters are an enabling abstraction for advanced resource management in computing utilities and grids. In particular, they support dynamic, policy-based cluster sharing between local users and hosted Grid services, resource reservation and adaptive provisioning, scavenging of idle resources, and dynamic instantiation of Grid services. These goals are achieved in a direct and general way through a new set of fundamental cluster management functions, with minimal impact on the Grid middleware itself.
Title: Dynamic virtual clusters in a grid site manager
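The core abstraction, servers moving between a shared pool and named vclusters, can be sketched as a toy allocator. The class and method names are invented; COD's real functions also reinstall software environments and rewire storage when a server changes hands.

```python
class ClusterOnDemandToy:
    """Toy allocator: servers migrate between a free pool and vclusters."""

    def __init__(self, servers):
        self.free = set(servers)
        self.vclusters = {}

    def grow(self, name, n):
        """Grant up to n servers to vcluster `name`; return how many."""
        grant = [self.free.pop() for _ in range(min(n, len(self.free)))]
        self.vclusters.setdefault(name, set()).update(grant)
        return len(grant)          # may be fewer than requested

    def shrink(self, name, n):
        """Return up to n of the vcluster's servers to the free pool."""
        vc = self.vclusters.get(name, set())
        for _ in range(min(n, len(vc))):
            self.free.add(vc.pop())
```

A site policy module would call grow/shrink in response to batch queue depth or Grid service demand, which is the "policy-based cluster sharing" the abstract describes.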
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210023
Soonwook Hwang, C. Kesselman
The generic, heterogeneous, and dynamic nature of the grid requires a new form of failure recovery mechanism to address its unique requirements, such as support for diverse failure handling strategies, separation of failure handling strategies from application codes, and user-defined exception handling. We here propose a grid workflow system (grid-WFS), a flexible failure handling framework for the grid, which addresses these grid-unique failure recovery requirements. Central to the framework is the use of workflow structure as a high-level recovery policy specification. We show how this use of high-level workflow structure allows users to achieve failure recovery in a variety of ways depending on the requirements and constraints of their applications. We also demonstrate that this use of workflow structure enables users to not only rapidly prototype and investigate failure handling strategies, but also easily change them by simply modifying the encompassing workflow structure, while the application code remains intact. Finally, we present an experimental evaluation of our framework using a simulation, demonstrating the value of supporting multiple failure recovery techniques in grid systems to achieve high performance in the presence of failures.
Title: Grid workflow: a flexible failure handling framework for the grid
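The separation the abstract emphasizes, recovery policy in the workflow structure rather than in the task code, can be illustrated with a toy runner: the task stays oblivious to failures while retry counts and an alternative task are supplied from outside. This loosely mirrors grid-WFS; the function and its parameters are invented.

```python
def run_with_policy(task, retries=0, alternative=None):
    """Run `task`; on failure retry up to `retries` times, then fall
    back to `alternative` if one is given. The recovery policy lives
    here, outside the task, as in workflow-level failure handling."""
    for _ in range(retries + 1):
        try:
            return task()
        except Exception:
            continue
    if alternative is not None:
        return alternative()
    raise RuntimeError("task failed after all recovery options")
```

Changing the strategy, say from retry to an alternative replica of the task, means editing only the call site (the "workflow"), never the task body, which is the paper's rapid-prototyping point.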
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210013
P. Balaji, Jiesheng Wu, T. Kurç, Ümit V. Çatalyürek, D. Panda, J. Saltz
The challenging issues in supporting data intensive applications on clusters include efficient movement of large volumes of data between processor memories and efficient coordination of data movement and processing by a runtime support to achieve high performance. Such applications have several requirements, such as guarantees in performance, scalability with these guarantees, and adaptability to heterogeneous environments. With the advent of user-level protocols like the Virtual Interface Architecture (VIA) and the modern InfiniBand Architecture, the latency and bandwidth experienced by applications have approached those of the physical network on clusters. In order to enable applications written on top of TCP/IP to take advantage of the high performance of these user-level protocols, researchers have developed a number of techniques, including user-level sockets layers over high performance protocols. In this paper, we study the performance and limitations of such a substrate, referred to here as SocketVIA, using a component framework designed to provide runtime support for data intensive applications. The experimental results show that by reorganizing certain components of an application (in our case, the partitioning of a dataset into smaller data chunks), we can make significant improvements in application performance. This leads to a higher scalability of applications with performance guarantees. It also allows fine grained load balancing, hence making applications more adaptable to heterogeneity in resource availability. The experimental results also show that the different performance characteristics of SocketVIA allow a more efficient partitioning of data at the source nodes, thus improving the performance of the application up to an order of magnitude in some cases.
Title: Impact of high performance sockets on data intensive applications
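Why chunk size interacts with the substrate can be seen in a back-of-the-envelope model: each chunk pays one per-message latency plus its serialization time. The numbers below are illustrative, not measurements from the paper.

```python
import math

def transfer_time(total_mb, chunk_mb, latency_s, bw_mb_s):
    """Time to ship total_mb in fixed-size chunks, where every chunk
    pays one per-message latency plus size/bandwidth."""
    chunks = math.ceil(total_mb / chunk_mb)
    return chunks * (latency_s + chunk_mb / bw_mb_s)
```

With a TCP-like per-message cost (say 10 ms), cutting 100 MB into 1 MB chunks doubles the transfer time relative to one big message, so fine-grained partitioning is punished; with a VIA-like cost two orders of magnitude smaller, the same fine partitioning costs almost nothing extra, which is what makes the fine-grained load balancing in the abstract affordable.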
Pub Date: 2003-06-22 | DOI: 10.1109/HPDC.2003.1210029
A. Ivan, V. Karamcheti
Increasingly, scalable distributed applications are being constructed by integrating reusable components spanning multiple administrative domains. Dynamic composition and deployment of such applications enables flexible QoS-aware adaptation to changing client and network characteristics. However, dynamic deployment across multiple administrative domains needs to perform cross-domain authentication and authorization, and satisfy various network and application-level constraints that may only be expressed in terms meaningful within a particular domain. Our solution to these problems, developed as part of the partitionable services framework, integrates a decentralized trust management and access control system (dRBAC) with a programming and run-time abstraction (object views). dRBAC encodes statements within and across domains using cryptographically signed credentials, providing a unifying and powerful mechanism for cross-domain authorization and expression of network and application constraints. Views define multiple implementations of a reusable component, thus enriching the set of components available for dynamic deployment and enabling fine-grained, customizable access control. We describe the runtime support for views, which consists of a view generator (VIG) and a host-level communication resource (Switchboard) for creating secure channels between pairs of components. We present a simple mail application to illustrate how dRBAC, views, and Switchboard can be used to customize reusable components and securely deploy them in heterogeneous environments.
Title: Using views for customizing reusable components in component-based frameworks
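The cross-domain authorization step can be sketched as a delegation-chain check: a subject holds a role if a chain of credentials leads back to the role's defining authority. This is a loose stand-in for dRBAC; it omits signatures and dRBAC's distinction between delegable and non-delegable assignments, and all names are invented.

```python
def has_role(subject, role, creds, authority):
    """creds: list of (issuer, role, subject) triples, signatures elided.
    Compute the closure of role holders reachable from `authority`."""
    holders = {authority}            # the authority trivially controls its role
    changed = True
    while changed:                   # fixed point over the delegation graph
        changed = False
        for issuer, r, subj in creds:
            if r == role and issuer in holders and subj not in holders:
                holders.add(subj)
                changed = True
    return subject in holders
```

A view implementation would consult such a check before exposing a component's customized interface across domains, with network and application constraints expressed as extra predicates on each credential.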