Distributed shared repository: a unified approach to distribution and persistency
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287727
Kazuhiko Kato, A. Narita, S. Inohara, T. Masuda
The authors propose an information management system providing distribution and persistency. By separating contents from virtual address space, the system takes a unified approach to both distribution and persistency: the former is achieved by moving contents between sites, the latter by moving contents between virtual address space and persistent storage. Contents may be any information, including data, programs, and even the execution state of a program. Contents are stored persistently in a logical space termed the distributed shared repository (DSR). A programming model for the DSR is proposed; using the model, persistency, fine-grain mobility of information, and various forms of distributed parameter passing can be obtained. The implementation and experimental performance of the system are also presented.
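As a rough illustration of moving arbitrary contents between a process's virtual address space and persistent storage, the sketch below (a toy, not the paper's DSR; the class name, directory, and use of Python pickling are assumptions) stores and reloads values by name:

```python
# Minimal sketch, assuming a flat on-disk store; not the paper's DSR implementation.
import os
import pickle

class ToyRepository:
    def __init__(self, root="/tmp/toy_dsr"):       # directory name is illustrative
        os.makedirs(root, exist_ok=True)
        self.root = root

    def store(self, name, contents):
        # "Contents" may be any picklable value: data, code objects, captured state.
        with open(os.path.join(self.root, name), "wb") as f:
            pickle.dump(contents, f)

    def load(self, name):
        # Bring the contents back into this process's virtual address space.
        with open(os.path.join(self.root, name), "rb") as f:
            return pickle.load(f)

repo = ToyRepository()
repo.store("config", {"threads": 4, "trace": False})
print(repo.load("config"))
```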
{"title":"Distributed shared repository: a unified approach to distribution and persistency","authors":"Kazuhiko Kato, A. Narita, S. Inohara, T. Masuda","doi":"10.1109/ICDCS.1993.287727","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287727","url":null,"abstract":"The authors propose an information management system providing distribution and persistency. By separating context from virtual address space, the system has a unified approach for both distribution and persistency. The former is achieved by moving contents between sites and the latter by moving contents between virtual address space and persistent storage. Contents include any information including data, program, and even the state of execution of a program. Contents are stored persistently in a logical space termed the distributed shared repository (DSR). A programming model for the DSR is proposed. Using the model, persistency, fine-grain mobility of information, and the passing of various distributed parameters can be obtained. The implementation anti experimental performance of the system are also presented.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128543711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A subsystem for swapping and mapped file I/O on top of Chorus
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287728
L. Borrmann, S. Noureddine
Chorus is a micro-kernel-based distributed operating system architecture. The authors explore the architectural and implementation issues involved in constructing a distributed paging service in the Chorus environment. Besides outlining the pager architecture, they provide insight into how the characteristic goals of a critical distributed application on top of the Chorus system may be put into practice. The relevant Chorus features are judged on their suitability for the pager implementation. The results of an experimental evaluation of the pager are included.
{"title":"A subsystem for swapping and mapped file I/O on top of Chorus","authors":"L. Borrmann, S. Noureddine","doi":"10.1109/ICDCS.1993.287728","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287728","url":null,"abstract":"Chorus is a micro-kernel-based distributed operating system architecture. The authors explore the architectural and implementational issues involved in constructing a distributed paging service in the Chorus environment. Apart from outlining the pager architecture, they provide insight into how the characteristic goals of a critical distributed application on top of the Chorus system may be put into practice. The respective Chorus features are thereby judged in view of their suitability with respect to the pager implementation. The results of an experimental evaluation of the pager are included.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132963798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A state-aggregation method for analyzing dynamic load-balancing policies
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287676
Hwa-Chun Lin, C. Raghavendra
Exact performance analyses of dynamic load-balancing policies for distributed systems are very difficult because the state space is multidimensional and load-balancing decisions are state-dependent. A state-aggregation method is proposed to analyze the performance of dynamic load-balancing policies. Those states with the same number of jobs are aggregated into a single state. The number of jobs in the system is modeled by a birth-death Markov process. The state transition rates are estimated by an iterative procedure. The proposed state-aggregation method is applied to analyze the performance of a particular dynamic load-balancing policy, namely a symmetric policy with threshold value equal to one. Extensive simulations were performed to study the accuracy of the state-aggregation method. This method provides accurate performance estimates for the symmetric policy for systems of various sizes when the mean job transfer delay is small compared to the average job service time.
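To make the modeling step concrete, here is a small sketch (illustrative only; the rates below are placeholders and the paper's iterative rate-estimation procedure is not reproduced) that computes the steady-state distribution and mean number of jobs of a finite birth-death chain once per-state transition rates are available:

```python
# Standard steady-state solution of a finite birth-death Markov chain:
# pi[k+1] = pi[k] * birth[k] / death[k], then normalize.
def birth_death_steady_state(birth, death):
    # birth[k]: rate from state k to k+1; death[k]: rate from state k+1 back to k.
    pi = [1.0]
    for k in range(len(birth)):
        pi.append(pi[-1] * birth[k] / death[k])
    total = sum(pi)
    return [p / total for p in pi]

# Example with made-up M/M/1-like rates, truncated at 10 jobs in the system.
lam, mu, N = 0.8, 1.0, 10
pi = birth_death_steady_state([lam] * N, [mu] * N)
mean_jobs = sum(k * p for k, p in enumerate(pi))
print(f"mean number of jobs ~= {mean_jobs:.3f}")
```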
{"title":"A state-aggregation method for analyzing dynamic load-balancing policies","authors":"Hwa-Chun Lin, C. Raghavendra","doi":"10.1109/ICDCS.1993.287676","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287676","url":null,"abstract":"Exact performance analyses of dynamic load-balancing policies for distributed systems are very difficult because the state space is multidimensional and load-balancing decisions are state-dependent. A state-aggregation method is proposed to analyze the performance of dynamic load-balancing policies. Those states with the same number of jobs are aggregated into a single state. The number of jobs in the system is modeled by a birth-death Markov process. The state transition rates are estimated by an iterative procedure. The proposed state-aggregation method is applied to analyze the performance of a particular dynamic load-balancing policy, namely a symmetric policy with threshold value equal to one. Extensive simulations were performed to study the accuracy of the state-aggregation method. This method provides accurate performance estimates for the symmetric policy for systems of various sizes when the mean job transfer delay is small compared to the average job service time.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116135509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Degradable agreement in the presence of Byzantine faults
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287703
N. Vaidya, D. Pradhan
The authors consider a system consisting of a sender that wants to send a value to certain receivers. Byzantine agreement protocols have previously been proposed to achieve this in the presence of arbitrary failures. The imposed requirement typically is that the fault-free receivers must all agree on the same value. An agreement protocol is proposed that achieves Lamport's Byzantine agreement (L. Lamport et al., 1982) up to a certain number of faults and a degraded form of agreement with a higher number of faults. The degraded form of agreement allows the fault-free receivers to agree on at most two different values, one of which is necessarily the default value. The proposed approach is named degradable agreement. An algorithm for degradable agreement is presented along with bounds on the number of nodes and network connectivity necessary to achieve degradable agreement.
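The degraded form of agreement can be stated as a simple predicate on the decisions of the fault-free receivers; the sketch below checks that property only (it is not the authors' protocol, and the default value used is a hypothetical choice):

```python
# Toy check of the degradable-agreement property: fault-free receivers decide on
# at most two distinct values, and if there are two, one must be the default.
DEFAULT = None  # hypothetical designated default value

def satisfies_degradable_agreement(decisions, default=DEFAULT):
    distinct = set(decisions)
    if len(distinct) == 1:
        return True                                       # classical agreement
    return len(distinct) == 2 and default in distinct     # degraded agreement

print(satisfies_degradable_agreement([5, 5, 5]))           # True
print(satisfies_degradable_agreement([5, None, 5, None]))  # True (degraded)
print(satisfies_degradable_agreement([5, 7, None]))        # False
```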
{"title":"Degradable agreement in the presence of Byzantine faults","authors":"N. Vaidya, D. Pradhan","doi":"10.1109/ICDCS.1993.287703","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287703","url":null,"abstract":"The authors consider a system consisting of a sender that wants to send a value to certain receivers. Byzantine agreement protocols have previously been proposed to achieve this in the presence of arbitrary failures. The imposed requirement typically is that the fault-free receivers must all agree on the same value. An agreement protocol is proposed that achieves Lamport's Byzantine agreement (L. Lamport et al., 1982) up to a certain number of faults and a degraded form of agreement with a higher number of faults. The degraded form of agreement allows the fault-free receivers to agree on at most two different values, one of which is necessarily the default value. The proposed approach is named degradable agreement. An algorithm for degradable agreement is presented along with bounds on the number of nodes and network connectivity necessary to achieve degradable agreement.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123841732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Composition of concurrent programs
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287686
A. Gopal, K. Perry
A model and a notation are developed for specifying the composition of concurrent programs. The work is based on the observation that the composition of concurrent programs often requires not only intraprocessor coordination but also interprocessor coordination. A notation is developed for explicitly specifying both forms of coordination within a single uniform framework. Much prior work has either ignored the interprocessor coordination aspects of composition or treated them separately from the intraprocessor coordination aspects.
{"title":"Composition of concurrent programs","authors":"A. Gopal, K. Perry","doi":"10.1109/ICDCS.1993.287686","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287686","url":null,"abstract":"A model and a notation are developed for specifying the composition of concurrent programs. The work is based on the observation that the composition of concurrent programs often requires not only intraprocessor coordination but also interprocessor coordination. A notation is developed for explicitly specifying both forms of coordination within a single uniform framework. Much prior work has either ignored the interprocessor coordination aspects of composition, or treated it in a manner separate from the intraprocessor coordination aspects.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117031296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time schedulability of two token ring protocols
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287691
S. Kamat, Wei Zhao
When designing real-time communication protocols, the primary objective is to guarantee the deadlines of synchronous messages while sustaining a high aggregate throughput. The authors compare two token ring protocols for their suitability in hard-real-time systems. A priority-driven protocol (e.g., IEEE 802.5) allows the implementation of a priority-based real-time scheduling discipline such as the rate monotonic algorithm. A timed token protocol (e.g., FDDI) provides guaranteed bandwidth and bounded access time for synchronous messages. These two protocols are studied by deriving their schedulability criteria, i.e., the conditions which determine whether a given message set can be guaranteed. Using these criteria, the average performance of these protocols is evaluated under different operating conditions. It is observed that neither protocol dominates the other over the entire range of the system parameter space. The conclusion is that the priority-driven protocol performs better at low bandwidths (1-10 Mb/s), while the timed token protocol performs better at higher bandwidths.
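To give a flavor of the two kinds of schedulability checks involved, the sketch below uses textbook-style sufficient conditions (the Liu-Layland rate monotonic utilization bound and the FDDI-style constraint that synchronous allocations fit within the target token rotation time after overhead); these are standard conditions, not necessarily the exact criteria derived in the paper, and the numbers are made up:

```python
# Textbook-style feasibility checks in the spirit of the comparison.
def rate_monotonic_ok(tasks):
    # tasks: list of (transmission_time, period); Liu-Layland utilization bound.
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

def timed_token_ok(sync_allocations, ttrt, overhead):
    # Synchronous bandwidth allocations must fit within the target token
    # rotation time (TTRT) after protocol overhead.
    return sum(sync_allocations) <= ttrt - overhead

print(rate_monotonic_ok([(1, 4), (2, 6), (1, 10)]))                 # True
print(timed_token_ok([1.0, 2.0, 1.5], ttrt=8.0, overhead=1.0))      # True
```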
{"title":"Real-time schedulability of two token ring protocols","authors":"S. Kamat, Wei Zhao","doi":"10.1109/ICDCS.1993.287691","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287691","url":null,"abstract":"When designing real-time communication protocols, the primary objective is to guarantee the deadlines of synchronous messages while sustaining a high aggregate throughput. The authors compare two token ring protocols for their suitability in hard-real-time systems. A priority driven protocol (e.g., IEEE 802.5) allows implementation of a priority based real-time scheduling discipline like the rate monotonic algorithm. A timed token protocol (e.g., FDDI) provides guaranteed bandwidth and bounded access time for synchronous messages. These two protocols are studied by deriving their schedulability criteria, i.e., the conditions which determine whether a given message set can be guaranteed. Using these criteria, the average performance of these protocols is evaluated under different operating conditions. It is observed that neither protocol dominates the other for the entire range of system parameter space. The conclusion is that the priority driven protocol performs better at low bandwidths (1-10 Mb/s) while the timed token protocol has a superior performance at higher bandwidths.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"30 19","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120942785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimal-delay decentralized maintenance of processor-group membership in TDMA-bus LAN systems
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287684
K. Kim, E. Shokri
Decentralized approaches to processor-group membership maintenance (GMM) aim to enable every active node in a real-time LAN system to maintain timely and consistent knowledge about the health status of all cooperating nodes and to recognize newly joining nodes. A practical scheme for decentralized GMM (DGMM) in TDMA (time division multiple access) bus-based real-time LAN systems, called here the periodic reception history broadcast (PRHB) scheme, was initially formulated by H. Kopetz et al. (1989) for application environments where the fault frequency is relatively low, such that no more than one node fails in any interval of two TDMA cycles. The authors develop a major extension of the scheme, PRHB with multiple fault detection (PRHB/MD), which is applicable to environments where the fault frequency is much higher; specifically, where up to half of the nodes may experience faults within any interval of three TDMA cycles. The scheme does not impose any limit on the number of transient link faults that any one node may experience. The scheme yields the minimal detection delay for all major fault types, and the delay does not exceed two TDMA cycles for the worst fault type. This detection delay characteristic is a significant improvement over those of previously developed DGMM schemes.
{"title":"Minimal-delay decentralized maintenance of processor-group membership in TDMA-bus LAN systems","authors":"K. Kim, E. Shokri","doi":"10.1109/ICDCS.1993.287684","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287684","url":null,"abstract":"Decentralized approaches to processor-group maintenance (GMM) are aimed at facilitating every active node in a real-time LAN system to maintain timely and consistent knowledge about the health status of all cooperating nodes and to recognize newly joining nodes. A practical scheme for this decentralized GMM (DGMM) in TDMA (time division multiple access) bus based real-time LAN systems, called here the periodic reception history broadcast (PRHB) scheme, was initially formulated by H. Kopetz et al. (1989) for application environments where the fault frequency is relatively low such that no more than one node fails in any interval of two TDMA cycle duration. The authors develop a major extension of the scheme, PRHB with multiple fault detection (PRHB/MD), which is applicable to environments where the fault frequency is much higher-to be more specific, where up to a half of the nodes map experience faults within any interval of three TDMA cycle duration. The scheme does not impose any limit on the number of transient faults of links that any one node may experience. The scheme yields the minimal detection delay for all major fault types and the delay does not exceed two TDMA cycles for the worst fault type. This detection delay characteristic is a significant improvement over those of previously developed DGMM schemes.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127358594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coherence in naming in distributed computing environments
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287720
S. Radia, J. Pachl
Many different kinds of names (identifiers) are used in computer systems. Names are resolved (interpreted) in a context. A context is a function that maps names to entities. Multiple contexts allow the flexibility of giving different meanings to a name in different parts of the system; however, there are situations where it is desirable for the meaning of a name to be the same in different parts. This property is called coherence in naming. Since the meaning of a name depends on the context selected, the analysis of coherence is based on the notion of closure mechanisms: implicit rules that select a context for resolving names. The authors define coherence and show how it is affected by various closure mechanisms. Then they present several approaches for dealing with the lack of coherence. Incoherence arises from selecting an incorrect context, and consequently, closure mechanisms are involved in the solutions.
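The abstract's basic definitions translate directly into a toy model; the sketch below (the names, contexts, and closure function are invented for illustration, not the paper's formal model) shows one coherent and one incoherent name across two contexts:

```python
# A context maps names to entities; a closure mechanism decides which context
# a component uses to resolve a name.  Example data is made up.
contexts = {
    "siteA": {"printer": "laser-3rd-floor", "db": "db.siteA.example"},
    "siteB": {"printer": "laser-3rd-floor", "db": "db.siteB.example"},
}

def resolve(name, context_id, closure=lambda cid: contexts[cid]):
    # The closure mechanism selects the context; the context resolves the name.
    return closure(context_id).get(name)

def coherent(name, context_ids):
    # A name is coherent across the given contexts if it denotes a single entity.
    return len({resolve(name, cid) for cid in context_ids}) == 1

print(coherent("printer", ["siteA", "siteB"]))  # True  -> coherent
print(coherent("db", ["siteA", "siteB"]))       # False -> incoherent
```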
{"title":"Coherence in naming in distributed computing environments","authors":"S. Radia, J. Pachl","doi":"10.1109/ICDCS.1993.287720","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287720","url":null,"abstract":"Many different kinds of names (identifiers) are used in computer systems. Names are resolved (interpreted) in a context. A context is a function that maps names to entities. Multiple contexts allow the flexibility of giving different meanings to a name in different parts of the system; however, there are situations where it is desirable for the meaning of a name to be the same in different parts. This property is called coherence in naming. Since the meaning of a name depends on the context selected, the analysis of coherence is based on the notion of closure mechanisms-implicit rules that select a context for resolving names. The authors define coherence and show how it is affected by various closure mechanisms. Then they present several approaches for dealing with the lack of coherence. Incoherence arises from selecting an incorrect context, and consequently, closure mechanisms are involved in the solutions.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127475023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sharing complex objects in a distributed PEER environment
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287709
F. Tuijnman, H. Afsarmanesh
In distributed computing environments, such as those required for computer-integrated manufacturing and other engineering applications, it is essential to support the sharing and exchange of complex objects among cooperating sites while preserving their autonomy. The specification of complex objects and their object boundaries in a federated database is described. Each database, as well as the entire federation, is modeled as a collection of related objects. Complex objects are defined as subgraphs of the entire object base and are specified by a root object and a collection of paths. A complex object can be distributed over several sites. A method is described that ensures referential integrity while maintaining the autonomy of each database. Different linearization techniques for complex objects are supported to enable applications to retrieve complex objects as single entities. This model is implemented in PEER, a federated, object-oriented database system developed for engineering applications.
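As a rough rendering of "a complex object is a subgraph specified by a root object and a collection of paths", the sketch below gathers the objects reached along each declared path from the root; the object names, reference fields, and flat dictionary representation are assumptions for illustration, not PEER's data model:

```python
# Toy object base: each object is a dict of named references to other objects.
object_base = {
    "gear42":   {"designed_by": "alice", "part_of": "gearbox7", "cad": "gear42.dwg"},
    "gearbox7": {"part_of": "truck1", "cad": "gearbox7.dwg"},
    "truck1":   {"cad": "truck1.dwg"},
    "alice":    {},
}

def complex_object(root, paths):
    # Follow each path (a sequence of reference names) from the root and collect
    # every object visited; the union of the traversals is the complex object.
    members = {root}
    for path in paths:
        current = root
        for ref in path:
            current = object_base[current].get(ref)
            if current is None or current not in object_base:
                break
            members.add(current)
    return members

print(complex_object("gear42", [("part_of",), ("part_of", "part_of"), ("designed_by",)]))
```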
{"title":"Sharing complex objects in a distributed PEER environment","authors":"F. Tuijnman, H. Afsarmanesh","doi":"10.1109/ICDCS.1993.287709","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287709","url":null,"abstract":"For distributed computing environments, required for computer integrated manufacturing and other engineering applications, it is most important to support the sharing and exchange of complex objects among cooperating sites, while preserving their autonomy. Specification of complex objects and their object boundaries in a federated database are described. Each database, as well as the entire federation, is modeled as a collection of related objects. Complex objects are defined as subgraphs of the entire object base and are specified by a root object and a collection of paths. A complex object can be distributed over several sites. A method is described that ensures referential integrity while maintaining the autonomy of each database. Different linearization techniques of complex objects are supported to enable applications to retrieve complex objects as single entities. This model is implemented in PEER, a federated, object-oriented database system developed for engineering applications.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114409676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Event ordering in a shared memory distributed system
Pub Date : 1993-05-25 DOI: 10.1109/ICDCS.1993.287701
L. Gunaseelan, R. LeBlanc
Past research has concentrated on ordering events in a system where processes communicate through messages. The authors look at issues in ordering events in a distributed system based on shared objects that interact via remote procedure calls (RPCs). They derive clock conditions for ordering operations on an object and provide clock maintenance schemes for time-stamping execution events. An object clock is associated with every shared object for clock exchange among processes. A clock maintenance algorithm is incrementally presented for objects where operations are atomic, and an algorithm is described for large-grained objects where operations are nested and non-atomic.
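The general idea of an object clock can be illustrated with a Lamport-style sketch; this shows the flavor only and is not the paper's clock conditions or maintenance algorithm. Each operation on the shared object merges the caller's logical clock with the object's clock and advances it, so causally related operations on the object receive increasing timestamps:

```python
# Minimal Lamport-style "object clock" sketch (illustrative assumption, not the
# paper's algorithm): the object carries a logical clock that is merged with the
# caller's clock on every operation.
class SharedObject:
    def __init__(self):
        self.clock = 0
        self.value = 0

    def invoke(self, caller_clock, delta):
        # Merge caller and object clocks, then tick for this operation.
        self.clock = max(self.clock, caller_clock) + 1
        self.value += delta
        return self.clock   # returned timestamp updates the caller's clock

obj = SharedObject()
process_clock = 0
process_clock = obj.invoke(process_clock, delta=5)   # timestamp 1
process_clock = obj.invoke(process_clock, delta=3)   # timestamp 2
print(process_clock, obj.value)
```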
{"title":"Event ordering in a shared memory distributed system","authors":"L. Gunaseelan, R. LeBlanc","doi":"10.1109/ICDCS.1993.287701","DOIUrl":"https://doi.org/10.1109/ICDCS.1993.287701","url":null,"abstract":"Past research has concentrated on ordering events in a system where processes communicate through messages. The authors look at issues in ordering events in a distributed system based on shared objects that interact via remote procedure calls (RPCs). They derive clock conditions for ordering operations on an object and provide clock maintenance schemes for time-stamping execution events. An object clock is associated with every shared object for clock exchange among processes. A clock maintenance algorithm is incrementally presented for objects where operations are atomic and an algorithm is described for large-grained objects where operations are nested and non-atomic.<<ETX>>","PeriodicalId":249060,"journal":{"name":"[1993] Proceedings. The 13th International Conference on Distributed Computing Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126026936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}