Disciplined approach towards the design of distributed systems
Pub Date: 2001-11-27 | DOI: 10.1088/0967-1846/2/2/004
M. Nikolaidou, D. Lelis, D. Mouzakis, P. Georgiadis
As the use of distributed systems spreads and the applications built on them become more demanding, efficient design of distributed systems has become a critical issue. Achieving the desired integration of distributed system components requires combining knowledge from different areas, which increases design complexity. Constructing and providing appropriate software tools can facilitate the design and evaluation of distributed system architectures. In this paper the architecture and functionality of the Intelligent Distributed System Design tool (IDIS) are presented. IDIS integrates methodologies and techniques from the artificial intelligence and simulation domains in order to provide a uniform environment for proposing alternative architectural solutions and evaluating their performance.
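As a rough illustration of the propose-and-evaluate loop such a tool embodies, the sketch below pairs a generator of candidate architectures with a stand-in performance model. The component options, design space and `simulate` cost model are hypothetical, not IDIS internals.

```python
# Illustrative propose-and-evaluate loop in the spirit of IDIS: a generation
# step proposes candidate architectures and a simulation step scores them.
# All names (components, options, the cost model) are hypothetical.
import itertools

COMPONENTS = {"server": ["single", "replicated"], "network": ["ethernet", "fddi"]}

def propose():
    """Enumerate candidate architectures (a knowledge-based tool would
    prune this space with design rules instead of brute force)."""
    keys = list(COMPONENTS)
    for values in itertools.product(*(COMPONENTS[k] for k in keys)):
        yield dict(zip(keys, values))

def simulate(arch):
    """Stand-in performance model: lower is better. A real tool would run
    a discrete-event simulation of the workload against `arch`."""
    cost = {"single": 10, "replicated": 4, "ethernet": 5, "fddi": 2}
    return sum(cost[v] for v in arch.values())

best = min(propose(), key=simulate)
print("best candidate:", best, "score:", simulate(best))
```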
{"title":"Disciplined approach towards the design of distributed systems","authors":"M. Nikolaidou, D. Lelis, D. Mouzakis, P. Georgiadis","doi":"10.1088/0967-1846/2/2/004","DOIUrl":"https://doi.org/10.1088/0967-1846/2/2/004","url":null,"abstract":"As the use of Distributed Systems is spreading widely and relevant applications become more demanding, efficient design of Distributed Systems has turned to be a critical issue. For achieving the desirable integration of Distributed System components, knowledge from different areas must be combined leading to increasing complexity. Construction and provision of the appropriate software tools may facilitate the design and evaluation of Distributed Systems architectures. In this paper the architecture and functionality of the Intelligent Distributed System Design tool (IDIS) are presented. IDIS integrates methodologies and techniques from the Artificial Intelligence and Simulation domain, in order to provide a uniform environment for proposing alternative architectural solutions and evaluating their performance.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2001-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133853257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comprehensive distributed shared memory system that is easy to use and program
Pub Date: 1999-12-01 | DOI: 10.1088/0967-1846/6/4/301
J. Silcock, A. Goscinski
An analysis of the distributed shared memory (DSM) work carried out by other researchers shows that it has improved the performance of applications at the expense of ease of programming and use. Many implementations require application programmers to write code that explicitly associates shared variables with synchronization variables, or to label variables according to their access patterns. Programmers are also required to explicitly initialize parallel applications and, in particular, to create DSM parallel processes on a number of workstations in the cluster. The aim of this research has been to improve the ease of programming and use of a DSM system without compromising its performance. RHODOS' DSM allows programmers to write shared memory code exploiting their sequential programming skills, without needing to learn DSM concepts. Placing DSM within the operating system allows the DSM environment to be initialized automatically and to remain transparent. The results of running two applications demonstrate that our DSM, despite its attention to ease of programming and use, achieves high performance.
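The programming model RHODOS preserves is the familiar one below: plain shared variables and locks, with no DSM-specific annotations. Python's multiprocessing primitives stand in for a single machine here; the point is the style of code, not the RHODOS API, which the abstract does not detail.

```python
# The style RHODOS aims to preserve: ordinary shared variables and mutual
# exclusion, nothing DSM-specific. multiprocessing stands in for one machine;
# RHODOS transparently provides the same model across a workstation cluster.
from multiprocessing import Process, Array, Lock

def worker(shared, lock, lo, hi):
    for i in range(lo, hi):
        with lock:               # plain lock, no synchronization-variable labelling
            shared[i] += 1

if __name__ == "__main__":
    shared = Array("i", 8)       # shared memory segment of 8 ints, zero-initialized
    lock = Lock()
    ps = [Process(target=worker, args=(shared, lock, 0, 8)) for _ in range(4)]
    for p in ps: p.start()
    for p in ps: p.join()
    print(list(shared))          # [4, 4, 4, 4, 4, 4, 4, 4]
```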
{"title":"A comprehensive distributed shared memory system that is easy to use and program","authors":"J. Silcock, A. Goscinski","doi":"10.1088/0967-1846/6/4/301","DOIUrl":"https://doi.org/10.1088/0967-1846/6/4/301","url":null,"abstract":"An analysis of the distributed shared memory (DSM) work carried out by other researchers shows that it has been able to improve the performance of applications, at the expense of ease of programming and use. Many implementations require application programmers to write code to explicitly associate shared variables with synchronization variables or to label the variables according to their access patterns. Programmers are required to explicitly initialize parallel applications and, in particular, to create DSM parallel processes on a number of workstations in the cluster of workstations. The aim of this research has been to improve the ease of programming and use of a DSM system while not compromising its performance. RHODOS' DSM allows programmers to write shared memory code exploiting their sequential programming skills without the need to learn the DSM concepts. The placement of DSM within the operating system allows the DSM environment to be automatically initialized and transparent. The results of running two applications demonstrate that our DSM, despite paying attention to ease of programming and use, achieves high performance.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131302183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An approach to interoperation between autonomous database systems
Pub Date: 1999-12-01 | DOI: 10.1088/0967-1846/6/4/303
A. Zisman, J. Kramer
In this paper we present an approach to support interoperation between autonomous database systems. In particular, we concentrate on distributed information discovery and access for systems with a large number of databases. We avoid the need for integrated global schemas or centralized structures containing information on the available data and its location. We instead provide an architecture that supports data distribution, autonomy and heterogeneity. The architecture also supports system evolution by the addition and removal of databases. A distributed information discovery algorithm is provided to perform data requests, database location and data access. A feature of our approach is to distribute the information about database contents using simple hierarchical information structures composed of special terms. A prototype has been developed to demonstrate and evaluate the approach. A hospital case study is used to illustrate its feasibility and applicability.
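A minimal sketch of the idea of distributing content information as hierarchical structures of terms: each node in the hierarchy records which databases advertise data for that term, and a discovery request walks the term path to locate candidate databases. The class, term names and databases below are illustrative assumptions, not the paper's structures.

```python
# Hypothetical hierarchical term structure for distributed data discovery:
# follow a path of terms, collect the databases registered at the match.
class TermNode:
    def __init__(self, term, databases=()):
        self.term = term
        self.databases = set(databases)   # databases advertising this term
        self.children = {}

    def add_child(self, child):
        self.children[child.term] = child
        return child

    def locate(self, path):
        """Follow a term path (e.g. ['patient', 'radiology']) and return
        the databases registered at the deepest matching node."""
        node = self
        for term in path:
            if term not in node.children:
                break
            node = node.children[term]
        return node.databases

root = TermNode("hospital")
patient = root.add_child(TermNode("patient", {"admissions_db"}))
patient.add_child(TermNode("radiology", {"imaging_db", "reports_db"}))
print(root.locate(["patient", "radiology"]))   # {'imaging_db', 'reports_db'}
```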
{"title":"An approach to interoperation between autonomous database systems","authors":"A. Zisman, J. Kramer","doi":"10.1088/0967-1846/6/4/303","DOIUrl":"https://doi.org/10.1088/0967-1846/6/4/303","url":null,"abstract":"In this paper we present an approach to support interoperation between autonomous database systems. In particular, we concentrate on distributed information discovery and access for systems with a large number of databases. We avoid the need for integrated global schemas or centralized structures containing information on the available data and its location. We instead provide an architecture that supports data distribution, autonomy and heterogeneity. The architecture also supports system evolution by the addition and removal of databases. A distributed information discovery algorithm is provided to perform data requests, database location and data access. A feature of our approach is to distribute the information about database contents using simple hierarchical information structures composed of special terms. A prototype has been developed to demonstrate and evaluate the approach. A hospital case study is used to illustrate its feasibility and applicability.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"110 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114121603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scalability evaluation of a distributed agent system
Pub Date: 1999-12-01 | DOI: 10.1088/0967-1846/6/4/302
L. Burness, Richard Titmuss, C. Lebre, K. Brown, A. Brookland
The use of new computing paradigms is intended to ease the design of complex systems. However, the non-functional aspects of a system, including performance, reliability and scalability, remain significant issues. It is hard to detect and correct many scalability problems through system testing alone - especially when the problems are rooted in the higher levels of the system design. Late corrections to the system can have serious implications for the clarity of the design and code. We have analysed the design of a system of multiple near-identical, 'reactive' agents for scalability. We believe that the approach taken is readily applicable to many object-oriented systems, and may form the basis of a rigorous design methodology. It is a simple yet scientific extension to current design techniques using message sequence charts, enabling design options to be compared quantitatively rather than qualitatively. Our experience suggests that such analysis should also be used to consider the effect of artificial intelligence, to ensure that autonomous behaviour has an overall beneficial effect on system performance.
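A toy version of that quantitative comparison: take the message count implied by each design option's message sequence chart and see how it grows with the number of agents n. The two options and their formulas are illustrative assumptions, not the designs analysed in the paper.

```python
# Comparing design options quantitatively by message complexity derived
# from their message sequence charts (hypothetical options shown).
def broadcast_design(n):
    # every agent notifies every other agent directly: O(n^2) messages
    return n * (n - 1)

def broker_design(n):
    # agents notify a broker, which forwards once per agent: O(n) messages
    return 2 * n

for n in (10, 100, 1000):
    print(f"n={n:5d}  broadcast={broadcast_design(n):8d}  broker={broker_design(n):6d}")
```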
{"title":"Scalability evaluation of a distributed agent system","authors":"L. Burness, Richard Titmuss, C. Lebre, K. Brown, A. Brookland","doi":"10.1088/0967-1846/6/4/302","DOIUrl":"https://doi.org/10.1088/0967-1846/6/4/302","url":null,"abstract":"The use of new computing paradigms is intended to ease the design of complex systems. However, the non-functional aspects of a system, including performance, reliability and scalability, remain significant issues. It is hard to detect and correct many scalability problems through system testing alone - especially when the problems are rooted in the higher levels of the system design. Late corrections to the system can have serious implications for the clarity of the design and code. We have analysed the design of a system of multiple near-identical, `reactive' agents for scalability. We believe that the approach taken is readily applicable to many object oriented systems, and may form the basis of a rigorous design methodology. It is a simple, yet scientific extension to current design techniques using message sequence charts, enabling design options to be compared quantitatively rather than qualitatively. Our experience suggests that such analysis should be used to consider the effect of artificial intelligence, to ensure that autonomous behaviour has an overall beneficial effect for system performance.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132626583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical, competitive scheduling of multiple DAGs in a dynamic heterogeneous environment
Pub Date: 1999-09-01 | DOI: 10.1088/0967-1846/6/3/303
Michael A. Iverson, F. Özgüner
With the advent of large-scale heterogeneous environments, there is a need for matching and scheduling algorithms that allow multiple applications structured as directed acyclic graphs (DAGs) to share the computational resources of the network. This paper presents a hierarchical matching and scheduling framework in which multiple applications compete for the computational resources of the network. In this environment each application makes its own scheduling decisions, so no centralized scheduling resource is required. Applications do not need direct knowledge of the other applications - knowledge of other applications arrives indirectly through load estimates (such as queue lengths). This paper presents an algorithm, called the dynamic hierarchical scheduling algorithm, which schedules tasks within this framework. A series of simulations is presented to examine the performance of the algorithm in this environment, compared with a more conventional single-user environment.
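The flavour of such a decentralized decision can be sketched in a few lines: an application picks the machine with the smallest estimated finish time, where the only information about competing applications is each machine's queue length. The cost model below is an illustrative assumption, not the paper's dynamic hierarchical scheduling algorithm.

```python
# One application's independent scheduling decision from indirect load
# estimates (queue lengths); machine data and cost model are hypothetical.
def pick_machine(task_cost, machines):
    """machines: {name: (queue_length, mean_task_time, speed)}.
    Estimated finish = waiting time implied by the queue + own run time."""
    def finish_time(m):
        queue_len, mean_time, speed = machines[m]
        return queue_len * mean_time + task_cost / speed
    return min(machines, key=finish_time)

machines = {
    "ws1": (4, 2.0, 1.0),   # long queue, moderate CPU
    "ws2": (1, 2.0, 0.5),   # short queue, slow CPU
    "ws3": (2, 2.0, 2.0),   # moderate queue, fast CPU
}
print(pick_machine(task_cost=8.0, machines=machines))  # 'ws3': 2*2 + 8/2 = 8.0
```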
{"title":"Hierarchical, competitive scheduling of multiple DAGs in a dynamic heterogeneous environment","authors":"Michael A. Iverson, F. Özgüner","doi":"10.1088/0967-1846/6/3/303","DOIUrl":"https://doi.org/10.1088/0967-1846/6/3/303","url":null,"abstract":"With the advent of large-scale heterogeneous environments, there is a need for matching and scheduling algorithms which can allow multiple, directed acyclic graph structured applications to share the computational resources of the network. This paper presents a hierarchical matching and scheduling framework where multiple applications compete for the computational resources on the network. In this environment, each application makes its own scheduling decisions. Thus, no centralized scheduling resource is required. Applications do not need direct knowledge of the other applications - knowledge of other applications arrives indirectly through load estimates (like queue lengths). This paper presents an algorithm, called the dynamic hierarchical scheduling algorithm, which schedules tasks within this framework. A series of simulations are presented to examine the performance of these algorithms in this environment, compared with a more conventional, single-user environment.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116461703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting customized failure models for distributed software
Pub Date: 1999-09-01 | DOI: 10.1088/0967-1846/6/3/302
M. Hiltunen, Vijaykumar Immanuel, R. Schlichting
The cost of employing software fault tolerance techniques in distributed systems is strongly related to the type of failures to be tolerated. For example, in terms of the amount of redundancy required and execution time, tolerating a processor crash is much cheaper than tolerating arbitrary (or Byzantine) failures. This paper describes an approach to constructing configurable services for distributed systems that allows easy customization of the type of failures to tolerate. Using this approach, it is possible to configure custom services across a spectrum of possibilities, from a very efficient but unreliable server group that does not tolerate any failures, to a less efficient but reliable group that tolerates crash, omission, timing, or arbitrary failures. The approach is based on building configurable services as collections of software modules called micro-protocols. Each micro-protocol implements a different semantic property or property variant, and interacts with other micro-protocols using an event-driven model provided by a runtime system. In addition to facilitating the choice of failure model, the approach allows service properties such as message ordering and delivery atomicity to be customized for each application.
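A minimal sketch of that composition model: micro-protocols are small modules that bind handlers to named events, and configuring a service means instantiating only the micro-protocols the chosen failure model requires. The runtime, event names and micro-protocols here are illustrative, not the paper's actual module set.

```python
# Event-driven composition of micro-protocols (illustrative, Cactus-style):
# each micro-protocol binds handlers to events raised by a tiny runtime.
class Runtime:
    def __init__(self):
        self.handlers = {}

    def bind(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def raise_event(self, event, **args):
        for handler in self.handlers.get(event, []):
            handler(**args)

class CrashSuspicion:
    """Micro-protocol: suspect a sender that missed its deadline."""
    def __init__(self, rt):
        rt.bind("timeout", self.on_timeout)
    def on_timeout(self, sender, **_):
        print(f"suspecting {sender} (crash failure model)")

class OmissionRetransmit:
    """Micro-protocol: tolerate message omission by retransmitting."""
    def __init__(self, rt):
        rt.bind("timeout", self.on_timeout)
    def on_timeout(self, sender, **_):
        print(f"retransmitting to {sender} (omission failure model)")

rt = Runtime()
CrashSuspicion(rt)          # configuration step: include only the
OmissionRetransmit(rt)      # micro-protocols the failure model requires
rt.raise_event("timeout", sender="p2")
```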
{"title":"Supporting customized failure models for distributed software","authors":"M. Hiltunen, Vijaykumar Immanuel, R. Schlichting","doi":"10.1088/0967-1846/6/3/302","DOIUrl":"https://doi.org/10.1088/0967-1846/6/3/302","url":null,"abstract":"The cost of employing software fault tolerance techniques in distributed systems is strongly related to the type of failures to be tolerated. For example, in terms of the amount of redundancy required and execution time, tolerating a processor crash is much cheaper than tolerating arbitrary (or Byzantine) failures. This paper describes an approach to constructing configurable services for distributed systems that allows easy customization of the type of failures to tolerate. Using this approach, it is possible to configure custom services across a spectrum of possibilities, from a very efficient but unreliable server group that does not tolerate any failures, to a less efficient but reliable group that tolerates crash, omission, timing, or arbitrary failures. The approach is based on building configurable services as collections of software modules called micro-protocols. Each micro-protocol implements a different semantic property or property variant, and interacts with other micro-protocols using an event-driven model provided by a runtime system. In addition to facilitating the choice of failure model, the approach allows service properties such as message ordering and delivery atomicity to be customized for each application.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121091988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Group membership failure detection: a simple protocol and its probabilistic analysis
Pub Date: 1999-09-01 | DOI: 10.1088/0967-1846/6/3/301
M. Raynal, F. Tronel
A group membership failure (in short, a group failure) occurs when one of the group members crashes. A group failure detection protocol has to inform all the non-crashed members of the group that this group entity has crashed. Ideally, such a protocol should be live (if a process crashes, then the group failure has to be detected) and safe (if a group failure is claimed, then at least one process has crashed). Unreliable asynchronous distributed systems are characterized by the impossibility for a process to get an accurate view of the system state. Consequently, the design of a group failure detection protocol that is both safe and live is a problem that cannot be solved in all runs of an asynchronous distributed system. This paper analyses a group failure detection protocol whose design naturally ensures its liveness. We show that by appropriately tuning some of its duration-related parameters, the safety property can be guaranteed with a probability as close to one as desired. This analysis shows that, in real distributed systems, it is possible to achieve failure detection with a negligible probability of wrong suspicions.
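A generic heartbeat-style detector shows the knob such an analysis turns: liveness holds by construction (a crashed member stops sending heartbeats and is eventually suspected), while the probability of a wrong suspicion, i.e. a safety violation, shrinks as the timeout grows. The delay distribution and parameters below are assumptions for illustration, not the paper's protocol or analysis.

```python
# Tuning a duration parameter to make safety violations negligible:
# estimate how often a *correct* member's heartbeat is delayed past the
# timeout T and therefore wrongly suspected. Illustrative model only.
import random

def wrong_suspicion_rate(timeout, rounds=10_000, mean_delay=1.0):
    wrong = sum(random.expovariate(1 / mean_delay) > timeout
                for _ in range(rounds))
    return wrong / rounds

for T in (2, 4, 8, 16):
    print(f"T={T:2d}  P(wrong suspicion) ~ {wrong_suspicion_rate(T):.4f}")
# Under this exponential-delay assumption, P(delay > T) = exp(-T/mean_delay),
# so the safety-violation probability can be driven as close to zero as
# desired by increasing T, while liveness is unaffected.
```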
{"title":"Group membership failure detection: a simple protocol and its probabilistic analysis","authors":"M. Raynal, F. Tronel","doi":"10.1088/0967-1846/6/3/301","DOIUrl":"https://doi.org/10.1088/0967-1846/6/3/301","url":null,"abstract":"A group membership failure (in short, a group failure) occurs when one of the group members crashes. A group failure detection protocol has to inform all the non-crashed members of the group that this group entity has crashed. Ideally, such a protocol should be live (if a process crashes, then the group failure has to be detected) and safe (if a group failure is claimed, then at least one process has crashed). Unreliable asynchronous distributed systems are characterized by the impossibility for a process to get an accurate view of the system state. Consequently, the design of a group failure detection protocol that is both safe and live is a problem that cannot be solved in all runs of an asynchronous distributed system. This paper analyses a group failure detection protocol whose design naturally ensures its liveness. We show that by appropriately tuning some of its duration-related parameters, the safety property can be guaranteed with a probability as close to one as desired. This analysis shows that, in real distributed systems, it is possible to achieve failure detection with a negligible probability of wrong suspicions.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"681 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131857361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guest Editor's Introduction: Special section on dependable distributed systems
Pub Date: 1999-09-01 | DOI: 10.1088/0967-1846/6/6/93
C. Fetzer
We rely more and more on computers. For example, the Internet reshapes the way we do business. A 'computer outage' can cost a company a substantial amount of money, not only in business lost during the outage but also in the negative publicity the company receives. This is especially true for Internet companies: after recent computer outages of Internet companies, we have seen a drastic fall in the shares of the affected companies.

There are multiple causes of computer outages. Although computer hardware is becoming more reliable, hardware-related outages remain an important issue. For example, some recent outages were caused by failed memory and system boards, and even by crashed disks - a failure type that can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related: faulty system software, middleware, and application software can all crash a system.

Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer, because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer, since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system.

Building dependable distributed systems is an extremely difficult task. There is no silver-bullet solution. Instead, one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring, and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures).

Building a system from scratch is an expensive and time-consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potenti
The group membership failure detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q must suspect that p has crashed; (S) if a process q suspects p, then p has indeed crashed. It can be shown that either (L) or (S) alone is achievable, but that (L) and (S) cannot both be achieved in an asynchronous system. In practice, it suffices to implement (L) and (S) in such a way that the probability of violating (L) or (S) becomes negligible. Raynal and Tronel present and analyse a protocol that achieves (L) deterministically and that can be tuned so that the probability of (S) being violated becomes negligible.

Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as are needed to satisfy the stochastic requirements of the protocol [1], while still maintaining sufficient performance. Since the clients of a protocol have different needs with respect to the performance/fault-tolerance trade-off, it is desirable to be able to customize protocols so that an appropriate trade-off can be selected. In this special section, Hiltunen et al describe how protocols can be composed from micro-protocols in the Cactus system. They show how a group RPC system can be customized to the needs of its clients. In particular, they show how taking additional failure classes into account affects the performance of the group RPC system.

References
[1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34 (2) 56-78
[2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI
[3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
{"title":"Guest Editor's Introduction: Special section on dependable distributed systems","authors":"C. Fetzer","doi":"10.1088/0967-1846/6/6/93","DOIUrl":"https://doi.org/10.1088/0967-1846/6/6/93","url":null,"abstract":"We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potenti","PeriodicalId":404872,"journal":{"name":"Distributed Syst. 
Eng.","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131901492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An adaptive architecture for causally consistent distributed services
Pub Date: 1999-06-01 | DOI: 10.1088/0967-1846/6/2/301
M. Ahamad, M. Raynal, G. Thia-Kime
This paper explores causally consistent distributed services when multiple related services are replicated to meet performance and availability requirements. This consistency criterion is particularly well suited to distributed services such as cooperative document sharing, and it is attractive because of the efficient implementations it allows. A new protocol for implementing causally consistent services is presented. It allows service instances to be created and deleted dynamically according to service access patterns in the distributed system. It also handles the case where different but related services are replicated independently. Another novel aspect of this protocol lies in its ability to use both push and pull mechanisms for disseminating updates to the objects that encapsulate service state.
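The standard mechanism behind causal consistency is causal delivery via vector clocks: a replica applies an update only after every update it causally depends on has been applied. The sketch below shows that classic test; it illustrates the consistency criterion itself, not necessarily the adaptive protocol proposed in the paper.

```python
# Classic causal-delivery test: an update from `sender` is deliverable when
# it is the next update from that replica and all other dependencies are met.
def deliverable(msg_vc, sender, local_vc):
    return (msg_vc[sender] == local_vc[sender] + 1 and
            all(msg_vc[i] <= local_vc[i]
                for i in range(len(local_vc)) if i != sender))

local = [0, 0, 0]
pending = [([0, 2, 0], 1), ([0, 1, 0], 1)]   # arrived out of causal order

progress = True
while pending and progress:
    progress = False
    for vc, sender in list(pending):
        if deliverable(vc, sender, local):
            local = [max(a, b) for a, b in zip(local, vc)]
            pending.remove((vc, sender))
            print("applied", vc, "->", local)
            progress = True
# applies [0, 1, 0] first, then [0, 2, 0], respecting causality
```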
{"title":"An adaptive architecture for causally consistent distributed services","authors":"M. Ahamad, M. Raynal, G. Thia-Kime","doi":"10.1088/0967-1846/6/2/301","DOIUrl":"https://doi.org/10.1088/0967-1846/6/2/301","url":null,"abstract":"This paper explores causally consistent distributed services when multiple related services are replicated to meet performance and availability requirements. This consistency criterion is particularly well suited for distributed services such as cooperative document sharing, and it is attractive because of the efficient implementations that are allowed by it. A new protocol for implementing causally consistent services is presented. It allows service instances to be created and deleted dynamically according to service access patterns in the distributed system. It also handles the case where different but related services are replicated independently. Another novel aspect of this protocol lies in its ability to use both push and pull mechanisms for disseminating updates to objects that encapsulate service state.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122999958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CORBA and RM-ODP: parallel or divergent?
Pub Date: 1999-06-01 | DOI: 10.1088/0967-1846/6/2/303
Nicole Dunlop, J. Indulska, K. Raymond
Modern architectures for distributed object environments (or distributed 'middleware') reveal an increasing trend towards standardization. The recent emergence of a standard for open distributed processing, the ISO/IEC Reference Model for Open Distributed Processing (RM-ODP) (ITU-T Recommendation X.901), and the concurrent development of the Object Management Group's Common Object Request Broker Architecture (CORBA), have prompted us to explore the relationship between these architectures. This paper analyses the CORBA architecture as a support environment for open distributed processing by comparing the business requirements for ODP, and the RM-ODP viewpoints, functions and distribution transparencies as specified in RM-ODP (ITU-T Recommendations X.901-4), with the CORBA architecture. Through this examination it is evident that, despite distinctly divergent terminology, there exist significant parallels between CORBA and RM-ODP.
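To give the flavour of such a comparison, the lookup below pairs a few RM-ODP distribution transparencies with the CORBA mechanisms commonly read as providing them. These pairings are a hypothetical, partial illustration, not the paper's analysis.

```python
# Hypothetical, partial mapping from RM-ODP distribution transparencies to
# CORBA mechanisms; common readings only, not the paper's full comparison.
RM_ODP_TO_CORBA = {
    "access transparency":   "IDL stubs/skeletons hide invocation details",
    "location transparency": "object references and the Naming Service",
    "failure transparency":  "limited: exceptions are surfaced to the client",
}

for transparency, mechanism in RM_ODP_TO_CORBA.items():
    print(f"{transparency:24s} -> {mechanism}")
```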
{"title":"CORBA and RM-ODP: parallel or divergent?","authors":"Nicole Dunlop, J. Indulska, K. Raymond","doi":"10.1088/0967-1846/6/2/303","DOIUrl":"https://doi.org/10.1088/0967-1846/6/2/303","url":null,"abstract":"Modern architectures for distributed object environments (or distributed `middleware') are revealing an increasing trend towards standardization. The recent emergence of a standard for open distributed processing, the ISO/IEC Reference Model for Open Distributed Processing (RM-ODP) (ITU-T Recommendation X.901) and the coincidence of the development of the Object Management Group's Common Object Request Broker Architecture (CORBA), has prompted us to explore the relationship between these architectures. This paper analyses the CORBA architecture as a support environment for open distributed processing by comparing the business requirements for ODP, RM-ODP viewpoints, functions and distribution transparencies as specified in RM-ODP (ITU-T Recommendations X.901-4) with the CORBA architecture. Through this examination it is evident that despite distinctly divergent terminology, there exist significant parallels between CORBA and RM-ODP.","PeriodicalId":404872,"journal":{"name":"Distributed Syst. Eng.","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131212304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}