Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040892
X. Qin, Hong Jiang, D. Swanson
In this paper, we investigate an efficient off-line scheduling algorithm in which real-time tasks with precedence constraints are executed in a heterogeneous environment. It provides more features and capabilities than existing algorithms that schedule only independent tasks in real-time homogeneous systems. In addition, the proposed algorithm takes the heterogeneities of computation, communication and reliability into account, thereby improving reliability. To provide fault-tolerant capability, the algorithm employs a primary-backup copy scheme that enables the system to tolerate permanent failures in any single processor. In this scheme, a backup copy is allowed to overlap with other backup copies on the same processor, as long as their corresponding primary copies are allocated to different processors. Tasks are judiciously allocated to processors so as to reduce the schedule length as well as the reliability cost, defined to be the product of processor failure rate and task execution time. In addition, the time for detecting and handling a permanent fault is incorporated into the scheduling scheme, thus making the algorithm more practical. To quantify the combined performance of fault-tolerance and schedulability, the performability measure is introduced. Compared with the existing scheduling algorithms in the literature, our scheduling algorithm achieves an average of 16.4% improvement in reliability and an average of 49.3% improvement in performability.
Title: An efficient fault-tolerant scheduling algorithm for real-time tasks with precedence constraints in heterogeneous systems
Journal: Proceedings International Conference on Parallel Processing
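The two rules the abstract states precisely — the reliability-cost definition and the backup-overlap condition — can be written down directly. This is a minimal sketch in Python, not the authors' code; the function names are ours.

```python
# Sketch (not the paper's implementation) of the two rules stated
# in the abstract.

def reliability_cost(failure_rate, exec_time):
    """Reliability cost, defined in the abstract as the product of
    processor failure rate and task execution time."""
    return failure_rate * exec_time

def backups_may_overlap(primary_proc_a, primary_proc_b):
    """Two backup copies may overlap on the same processor only if
    their primary copies are allocated to different processors."""
    return primary_proc_a != primary_proc_b
```

A scheduler would minimize the sum of `reliability_cost` over all task placements while using `backups_may_overlap` to pack backup copies.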
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040902
Hung-Chang Hsiao, C. King
A wide-area service discovery infrastructure provides a repository in which services over a wide area can register themselves and clients everywhere can inquire about them. We discuss how to build such an infrastructure based on the peer-to-peer model. The proposed system, called Neuron, can be executed on top of a set of federated nodes across the global network and aggregate their resources to provide the discovery service. Neuron is self-organizing, self-tuning, and capable of tolerating failures of nodes and communication links. In addition, it allows the services to be described with arbitrary forms and the system load to be distributed evenly to the nodes. Neuron also supports event notification. We evaluated Neuron via simulation. The preliminary results show that service registration, discovery and service state advertising take at most O(log N) hops to complete.
Title: Neuron - a wide-area service discovery infrastructure
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040882
Xiaoxing Ma, A. Chan, Jian Lu
This paper presents a novel approach, called WebGOP, for architecture modeling and programming of web-based distributed applications. WebGOP uses the graph-oriented programming (GOP) model, under which the components of a distributed program are configured as a logical graph and implemented using a set of operations defined over the graph. WebGOP extends the application of GOP to the World Wide Web environment and provides more powerful architectural support. In WebGOP, the architecture graph is reified as an explicit object which itself is distributed over the network, providing a graph-oriented context for the execution of distributed applications. The programmer can specialize the type of a graph to represent a particular architecture style tailored for an application. WebGOP also has built-in support for flexible and dynamic architectures, including dynamic reconfiguration. We describe the WebGOP framework, a prototypical implementation of the framework on top of SOAP, and performance evaluation of the prototype. Results of the performance evaluation showed that the overhead introduced by WebGOP over SOAP is reasonable and acceptable.
Title: WebGOP: A framework for architecting and programming dynamic distributed Web applications
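The GOP model described above — components bound to nodes of a logical graph, communicating via operations defined over the graph's edges — can be sketched in a few lines. The class and method names here are our invention for illustration, not WebGOP's API.

```python
# Sketch (names are ours, not WebGOP's) of the graph-oriented
# programming model: a logical graph whose edges define which
# components may exchange messages.

class LogicalGraph:
    def __init__(self):
        self.edges = {}   # node -> set of neighbour nodes
        self.inbox = {}   # node -> list of (sender, message)

    def add_edge(self, a, b):
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)
        self.inbox.setdefault(a, [])
        self.inbox.setdefault(b, [])

    def send_to_neighbors(self, node, msg):
        # A graph operation: deliver msg along every edge of `node`.
        for nbr in self.edges.get(node, ()):
            self.inbox[nbr].append((node, msg))
```

Specializing the graph type (ring, star, tree) would then encode an architecture style, which is the kind of specialization the abstract describes.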
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040919
Qingfeng Zhuge, Z. Shao, E. Sha
Code size expansion of software-pipelined loops is a critical problem for DSP systems with strict code size constraints. Some ad-hoc code size reduction techniques were used to try to reduce the prologue/epilogue produced by software pipelining. We present a fundamental understanding of the relationship between code size expansion and software pipelining. Based on the retiming concept, we present a powerful Code-size REDuction (CRED) technique and its application on various kinds of processors. We also provide CRED algorithms integrated with the software pipelining process. One advantage of our algorithms is that they can explore the trade-off space between "perfect" software pipelining and constrained code size. That is, the software pipelining process can be controlled to generate a schedule that respects a code size requirement. The experimental results show the effectiveness of our algorithms in both reducing the code size for software-pipelined loops and exploring the code size/performance trade-off space.
Title: Optimal code size reduction for software-pipelined loops on DSP applications
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040896
Seungjin Park, B. V. Voorst
The route discovery and maintenance processes in wireless mobile networks are very expensive tasks due to the mobility of the hosts. Route discovery requires a considerable amount of resources, and therefore it is wise to utilize the effort already invested in existing paths. This paper proposes a dynamic hybrid routing (DHR) protocol for ad hoc networks, which constructs paths only upon demand by taking attributes from both proactive and reactive algorithms. The goal of DHR is to re-use, whenever possible, portions of several existing paths when establishing a new path. The reusability is accomplished by using dynamic proactive zones (PZs), through which nearby existing path information is disseminated. By utilizing the information stored in PZs, considerable savings (in time and traffic) can be achieved over other on-demand routing algorithms that use flooding. In other route-finding algorithms, proactive zones are formed throughout the network and remain unchanged, whereas in DHR, proactive zones are created and destroyed dynamically around the existing paths.
Title: Dynamic hybrid routing (DHR) in mobile ad hoc networks
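One way to picture a proactive zone around an existing path is as the set of nodes within a fixed hop radius of any node on that path. The sketch below computes such a zone by BFS over an adjacency map; it is our illustration of the idea, not the paper's protocol, and the names and the radius parameter are assumptions.

```python
# Sketch (our formulation, not the paper's) of a proactive zone:
# all nodes within `radius` hops of any node on an existing path,
# found by breadth-first search over an adjacency map.
from collections import deque

def proactive_zone(adjacency, path_nodes, radius):
    zone = set(path_nodes)
    frontier = deque((n, 0) for n in path_nodes)
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue  # zone boundary reached along this branch
        for nbr in adjacency.get(node, ()):
            if nbr not in zone:
                zone.add(nbr)
                frontier.append((nbr, dist + 1))
    return zone
```

A route request arriving at a node inside the zone could then be answered from the disseminated path information instead of being flooded further.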
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040854
T. Monreal, V. Viñals, Antonio González, M. Valero
Register files are becoming one of the critical components of current out-of-order processors in terms of delay and power consumption, since their potential to exploit instruction-level parallelism is closely related to the size and number of ports of the register file. In conventional register renaming schemes, register releasing is conservatively done only after the instruction that redefines the same register is committed. Instead, we propose a scheme that releases registers as soon as the processor knows that there will be no further use of them. We present two early releasing hardware implementations with different performance/complexity trade-offs. Detailed cycle-level simulations show either a significant speedup for a given register file size, or a reduction in register file size for a given performance level.
Title: Hardware schemes for early register release
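The release condition the abstract describes — a physical register is dead once a later redefinition of the same logical register has been seen and all its readers are done — can be modelled with a small state machine. This is our simplification for illustration, not the paper's hardware design.

```python
# Sketch (our simplification, not the paper's hardware) of early
# register release: free a physical register once it has been
# redefined and its pending-read count drops to zero.

class PhysReg:
    def __init__(self):
        self.pending_reads = 0
        self.redefined = False
        self.free = False

    def add_reader(self):
        self.pending_reads += 1

    def reader_done(self):
        self.pending_reads -= 1
        self._try_release()

    def redefine(self):
        # A later instruction writing the same logical register
        # has been renamed; no new readers of this value can appear.
        self.redefined = True
        self._try_release()

    def _try_release(self):
        if self.redefined and self.pending_reads == 0:
            self.free = True
```

The conventional scheme would instead set `free` only when the redefining instruction commits, which is the conservatism the paper removes.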
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040907
Chih-Fang Wang, S. Sahni
We develop efficient algorithms for problems in computational geometry (convex hull, smallest enclosing box, ECDF, two-set dominance, maximal points, all-nearest neighbor, and closest pair) on the OTIS-Mesh optoelectronic computer. We also demonstrate algorithms for computing convex hull and prefix sum with condition on a multi-dimensional mesh, which are used to compute convex hull and ECDF, respectively. We show that all these problems can be solved in O(√N) time even with N^2 inputs.
Title: Computational geometry on the OTIS-Mesh optoelectronic computer
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040883
Gurdip Singh, Ye Su
The development of correct synchronization code for distributed programs is a challenging task. In this paper, we propose an aspect-oriented technique for developing synchronization code for message passing systems. Our approach is to factor out synchronization as a separate aspect, synthesize synchronization code and then compose it with the functional code. Specifically, we allow the designer of an application to first design the functional code. The designer can then annotate the functional code with regions and specify a high-level "global invariant" specifying the synchronization policy. A synchronization policy essentially gives the occupancy rules for the various regions. The solution to this problem, which we term the region synchronization problem, involves deriving a set of rules for entering and exiting each region. We provide a systematic translation of the global invariant into a message passing algorithm for a point-to-point message passing system. We show that many existing synchronization problems can be specified as instances of the region synchronization problem. Hence, our algorithms can be used to solve a large class of synchronization problems.
Title: Region synchronization in message passing systems
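A global invariant over region occupancy can be modelled as a predicate on occupancy counts: a process may enter a region only if the invariant still holds with that region's count incremented. The sketch below is our formulation of that idea, with readers/writers as the example policy; none of these names come from the paper.

```python
# Sketch (our formulation, not the paper's algorithm): a
# synchronization policy as a predicate over region occupancy,
# checked before a process enters a region.

def can_enter(occupancy, region, invariant):
    trial = dict(occupancy)
    trial[region] = trial.get(region, 0) + 1
    return invariant(trial)

# Example policy: classic readers/writers as a global invariant --
# at most one writer, and writers exclude readers.
def readers_writers(occ):
    w, r = occ.get("W", 0), occ.get("R", 0)
    return w <= 1 and (w == 0 or r == 0)
```

The paper's contribution is translating such an invariant into distributed entry/exit rules over point-to-point messages; the centralized check above only illustrates what the invariant expresses.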
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040876
Y. Nam, D. Kim, Tae-Young Choe, Chan-Ik Park
With a large number of internal disks and the rapid growth of disk capacity, storage systems become more susceptible to double disk failures. Thus, the need for reliable storage systems such as RAID6 is expected to gain in importance. However, RAID6 architectures such as RM2, P+Q, EVEN-ODD, and DATUM traditionally suffer from low write I/O performance caused by updating two distinct parity blocks for each user write. To overcome this, we propose an enhanced RM2 architecture which combines RM2, one of the well-known RAID6 architectures, with a Lazy Parity Update (LPU) technique. Extensive performance evaluations reveal that the write I/O performance of the proposed architecture is about two times higher than that of RM2 under various I/O workloads, with little degradation in reliability.
Title: Enhancing write I/O performance of disk array RM2 tolerating double disk failures
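The lazy-parity idea can be pictured with XOR parity: each write updates one parity eagerly and logs the delta for the second parity, which is applied later in batch, halving the parity work on the write path. This sketch is our simplification for illustration; the actual RM2/LPU layout and update rules are more involved.

```python
# Sketch (our simplification, not the RM2/LPU design): one parity
# kept current on every write, the second updated lazily from a
# log of XOR deltas.

class LazyParityArray:
    def __init__(self, nblocks):
        self.data = [0] * nblocks
        self.p1 = 0        # first parity, kept current
        self.p2 = 0        # second parity, updated lazily
        self.log = []      # deferred XOR deltas for p2

    def write(self, i, value):
        delta = self.data[i] ^ value
        self.data[i] = value
        self.p1 ^= delta          # eager update
        self.log.append(delta)    # deferred update

    def flush(self):
        # Apply all deferred deltas to the second parity in batch.
        for delta in self.log:
            self.p2 ^= delta
        self.log.clear()
```

Between flushes the array still tolerates a single failure via `p1`; the reliability cost of deferral is the window in which `p2` is stale, which matches the abstract's "little degradation in reliability".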
Pub Date: 2002-08-18, DOI: 10.1109/ICPP.2002.1040893
W. Fang, Cho-Li Wang, F. Lau
We present the design of a global object space in a distributed Java Virtual Machine that supports parallel execution of a multi-threaded Java program on a cluster of computers. The global object space virtualizes a single Java object heap across machine boundaries to facilitate transparent object accesses. Based on the object connectivity information that is available at runtime, objects reachable from threads at different nodes, called distributed-shared objects, are detected. With the detection of distributed-shared objects, we can alleviate the overheads of maintaining memory consistency within the global object space. Several runtime optimization methods have been incorporated in the global object space design, including an object home migration method that reallocates the home of a distributed-shared object, synchronized method migration that allows the remote execution of a synchronized method at the home node of its synchronized object, and object pushing that uses the object connectivity information to improve access locality.
Title: Efficient global object space support for distributed JVM on cluster
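A simple home-migration policy, consistent with what the abstract describes, is to move an object's home to the node that dominates its recent accesses, so that most accesses become local. The function below is our illustration; the dominance threshold and names are assumptions, not the paper's heuristic.

```python
# Sketch (our illustration, not the paper's heuristic) of object
# home migration: move the home to the node with a dominant share
# of recent accesses, otherwise keep the current home.

def choose_home(access_counts, current_home, dominance=0.5):
    total = sum(access_counts.values())
    if total == 0:
        return current_home
    node, hits = max(access_counts.items(), key=lambda kv: kv[1])
    # Migrate only when one node accounts for a strict majority
    # of accesses, to avoid ping-ponging the home.
    return node if hits / total > dominance else current_home
```

After migration, accesses from the new home node no longer cross the network, which is the locality gain home migration targets.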