Y. Hashi, Kazuyoshi Matsumoto, Y. Seki, M. Hiji, Toru Abe, T. Suganuma
ICT supports smart communities in their aim to build efficient and sustainable social infrastructure. To realize a smart community, it is necessary to manage and analyze data about the community, including large volumes of sensing data and meta-data, as well as information on data sources and consent for use, all of which are interrelated. We propose a data management scheme capable of both high-speed search over large volumes of data for analysis and flexible search over data whose structure changes depending on the collection environment. A major characteristic of our scheme is that it combines a schema-free, document-oriented database with a graph database suited for flexible search. We implement the proposed data management scheme and evaluate its search performance on sensing data. The results show that large volumes of sensing data can be searched at high speed. We believe the proposed data management scheme can minimize the time required for analysis.
{"title":"Design and Implementation of Data Management Scheme to Enable Efficient Analysis of Sensing Data","authors":"Y. Hashi, Kazuyoshi Matsumoto, Y. Seki, M. Hiji, Toru Abe, T. Suganuma","doi":"10.1109/ICAC.2015.58","DOIUrl":"https://doi.org/10.1109/ICAC.2015.58","url":null,"abstract":"ICT supports smart communities in their aim to build efficient and sustainable social infrastructure. To realize a smart community, it is necessary to manage and analyze data about the community including large volumes of sensing data, meta-data, as well as information on data sources and consent for use, all of which are interrelated. We propose a data management scheme capable of both high-speed search of large volumes of data for analysis, and flexible search of data which changes depending on the collection environment. A major characteristic of our scheme is that it combines a schema-free, document-oriented database and an graph database suited for flexible search. We implement proposed data management scheme, and evaluate a performance of the search for sensing data. As the result, searching time of large volumes of sensing data is very high-speed. We believe that proposed data management scheme is able to minimize the the time required for analysis.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"93 1","pages":"319-324"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78969826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
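The hybrid document/graph combination described in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `HybridStore`, its methods, and the sample data are all invented. Bulky sensing records live in a schema-free document store for O(1) keyed retrieval, while a small graph over metadata (owners, sensors, consent) supports the flexible relationship queries.

```python
# Minimal sketch of the hybrid-store idea (hypothetical, for illustration):
# documents hold large schema-free sensing records; a metadata graph links
# owners, sensors, and record ids for flexible relationship search.

class HybridStore:
    def __init__(self):
        self.documents = {}   # doc_id -> schema-free sensing record
        self.edges = {}       # node -> set of related nodes

    def put_document(self, doc_id, record):
        self.documents[doc_id] = record

    def relate(self, a, b):
        # undirected metadata edge, e.g. owner -> sensor, sensor -> record id
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def neighbors(self, node):
        return self.edges.get(node, set())

    def readings_for_owner(self, owner):
        # flexible part: hop owner -> sensors -> record ids in the graph,
        # then fetch each record from the document store in O(1)
        docs = []
        for sensor in self.neighbors(owner):
            for doc_id in self.neighbors(sensor):
                if doc_id in self.documents:
                    docs.append(self.documents[doc_id])
        return docs
```

A query such as "all readings from sensors owned by alice" then traverses the graph only over small metadata nodes and touches the document store once per matching record.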
In this paper, we propose HiSML, a high-level integrated service monitoring language. The language is designed for building monitoring solutions for cloud computing platforms. The primary benefits of HiSML over existing monitoring tools are: 1) it is used to build the monitoring solution from scratch, with the monitored objects specialized for the target platform; 2) it integrally monitors services in all layers of cloud computing platforms: the infrastructure, platform, and software layers; 3) it allows programmers to describe the dependencies between monitored services to guide analysis of the collected data; 4) it allows programmers to manually store and back up the monitored data; 5) it supports hybrid programming with other programming languages to assist the adaptive management of cloud computing platforms.
{"title":"HiSML: A High-Level Integrated Service Monitoring Language","authors":"Xinkui Zhao, Jianwei Yin, Pengxiang Lin, Zuoning Chen","doi":"10.1109/ICAC.2015.13","DOIUrl":"https://doi.org/10.1109/ICAC.2015.13","url":null,"abstract":"In this paper, we propose HiSML, a high-level integrated service monitoring language. The language is designed to build monitoring solutions for cloud computing platforms. The primary benefits of HiSML over existing monitoring tools are: 1) it is used to build the monitoring solution from scratch, and the monitored objects are specialized for the target platform, 2) it integrally monitors services in all layers of cloud computing platforms: infrastructure layer, platform layer and software layer, 3) it allows programmers to describe the dependency between monitored services to guide analysis on the collected data, 4) it allows programmers to manually store and backup the monitored data, 5) it supports hybrid programming with other programming languages to assist the adaptive management of cloud computing platforms.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"62 1","pages":"137-138"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86107191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance degradation due to imperfect isolation of hardware resources such as cache, network, and I/O is a frequent occurrence on public cloud platforms. A web server suffering from performance interference degrades the interactive user experience and results in lost revenue. Existing work on interference mitigation addresses this problem through intrusive changes to the hypervisor, e.g., using intelligent schedulers or live migration, many of which are available only to infrastructure providers and not to end consumers. In this paper, we present a framework for administering web server clusters in which the effects of interference can be reduced by intelligent reconfiguration. Our controller, ICE, improves web server performance during interference by performing two-fold autonomous reconfigurations. First, it reconfigures the load balancer at the ingress point of the server cluster, reducing the load on the impacted server. ICE then reconfigures the middleware on the impacted server to reduce its load even further. We implement and evaluate ICE on CloudSuite, a popular web application benchmark, with two popular load balancers, HAProxy and LVS. Our experiments in a private cloud testbed show that ICE can improve the median response time of web servers by up to 94% compared to a statically configured server cluster. ICE also outperforms an adaptive load balancer (using least-connection scheduling) by up to 39%.
{"title":"ICE: An Integrated Configuration Engine for Interference Mitigation in Cloud Services","authors":"A. Maji, S. Mitra, S. Bagchi","doi":"10.1109/ICAC.2015.48","DOIUrl":"https://doi.org/10.1109/ICAC.2015.48","url":null,"abstract":"Performance degradation due to imperfect isolation of hardware resources such as cache, network, and I/O is a frequent occurrence on public cloud platforms. A web server suffering from performance interference degrades the interactive user experience and results in lost revenue. Existing work on interference mitigation addresses this problem through intrusive changes to the hypervisor, e.g., using intelligent schedulers or live migration, many of which are available only to infrastructure providers and not to end consumers. In this paper, we present a framework for administering web server clusters in which the effects of interference can be reduced by intelligent reconfiguration. Our controller, ICE, improves web server performance during interference by performing two-fold autonomous reconfigurations. First, it reconfigures the load balancer at the ingress point of the server cluster, reducing the load on the impacted server. ICE then reconfigures the middleware on the impacted server to reduce its load even further. We implement and evaluate ICE on CloudSuite, a popular web application benchmark, with two popular load balancers, HAProxy and LVS. Our experiments in a private cloud testbed show that ICE can improve the median response time of web servers by up to 94% compared to a statically configured server cluster. ICE also outperforms an adaptive load balancer (using least-connection scheduling) by up to 39%.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"61 1","pages":"91-100"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75425378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
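The first of ICE's two reconfiguration stages, shifting ingress load away from an interference-impacted server, can be sketched as a weight adjustment of the kind a balancer such as HAProxy supports. This is an invented illustration of the idea, not ICE's actual algorithm; the `shed_factor` and all server names are assumptions.

```python
# Hypothetical sketch of stage-1 reconfiguration: shed a fraction of the
# impacted server's load-balancer weight and spread it over its peers,
# preserving the total weight of the cluster.

def rebalance(weights, impacted, shed_factor=0.5):
    """weights: server -> balancer weight; returns an adjusted copy."""
    new = dict(weights)
    shed = new[impacted] * shed_factor
    new[impacted] -= shed
    others = [s for s in new if s != impacted]
    for s in others:
        new[s] += shed / len(others)   # redistribute evenly to healthy peers
    return new
```

For example, `rebalance({"a": 30.0, "b": 30.0, "c": 30.0}, "a")` halves server `a`'s weight and raises `b` and `c` equally, so the cluster's aggregate capacity setting is unchanged.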
This paper describes a novel scalable architecture model for the design of harvesting-aware applications on FPGAs. The objective of the proposed approach is to reduce the additional design complexity inherent in this type of design. The adopted strategy is to adapt the energy consumption of the system by controlling the toggle rate of its signals according to the energy prediction and the performance levels set by the system designer. The architecture model was implemented on a Cyclone IV FPGA, and its main advantages are that it can be used in a wide range of applications, since it is modelled to control synchronous systems, and that it has little impact on the project design, since coupling the harvesting-aware subsystem with the application modules does not require changes to the application source code. In the case study presented, an RGB-YCbCr converter was used as the application to validate the implementation, the simulation data, and the results presented in this paper.
{"title":"An Architecture Model for Harvesting-Aware Applications in FPGA","authors":"Marília Lima, Pedro Lazaro A. Santos, C. Araujo","doi":"10.1109/ICAC.2015.19","DOIUrl":"https://doi.org/10.1109/ICAC.2015.19","url":null,"abstract":"This paper describes a novel scalable architecture model for the design of harvesting-aware applications on FPGAs. The objective of the proposed approach is to reduce the additional design complexity inherent to this type of design. The adopted strategy was to adapt the energy consumption of the system by controlling the toggle rate of its signals according to the energy prediction and the performance levels set by the system designer. The architecture model was designed in a Cyclone IV FPGA and its main advantages are: it may be used within a wide range of applications, since it has been modelled to control synchronous systems, it causes a little impact on the project design, as to couple the harvesting-aware subsystem with the application modules does not imply changes in the application source code. In the case study presented, an RGB-YCbCr converter was used as an application in order to validate the implementation data, simulation and results presented in this paper.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"29 1","pages":"153-154"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77431062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
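The control strategy above, bounding signal activity by the predicted energy budget while respecting designer-set performance levels, can be modelled in one function. All quantities and bounds here are invented for illustration; the paper's actual controller operates in hardware on toggle-rate signals.

```python
# Toy model of the harvesting-aware strategy (hypothetical numbers):
# compute the toggle rate the predicted energy budget can sustain over a
# control window, clamped to the designer's performance floor and cap.

def max_toggle_rate(predicted_energy_j, energy_per_toggle_j, window_s,
                    floor_hz, cap_hz):
    # toggles per second that the harvested-energy prediction can sustain
    sustainable = predicted_energy_j / (energy_per_toggle_j * window_s)
    # clamp to the performance levels set by the system designer
    return max(floor_hz, min(cap_hz, sustainable))
```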
Real-time, cost-effective execution of "Big Data" applications on MapReduce clusters has been an important goal for many scientists in recent years. The MapReduce paradigm has been widely adopted by major computing companies as a powerful approach to large-scale data analytics. However, running MapReduce workloads in cluster environments is particularly challenging due to the trade-offs between the need for performance and the corresponding budget cost. Furthermore, the large number of resource configuration parameters exacerbates the problem, as users must tune the parameters manually without knowing their impact on performance and budget costs. In this paper, we describe our approach to cost-effective scheduling of MapReduce applications. We present an overview of our framework, which enables appropriate parameter configuration to identify cost-efficient resource allocations. Our early experimental results illustrate the operation and benefits of our approach.
{"title":"A Framework for Cost-Effective Scheduling of MapReduce Applications","authors":"Nikos Zacheilas, V. Kalogeraki","doi":"10.1109/ICAC.2015.38","DOIUrl":"https://doi.org/10.1109/ICAC.2015.38","url":null,"abstract":"Real-time, cost-effective execution of \"Big Data\" applications on MapReduce clusters has been an important goal for many scientists in recent years. The MapReduce paradigm has been widely adopted by major computing companies as a powerful approach for large-scale data analytics. However, running MapReduce workloads in cluster environments has been particularly challenging due to the trade-offs that exist between the need for performance and the corresponding budget cost. Furthermore, the large number of resource configuration parameters exacerbates the problem, as users must manually tune the parameters without knowing their impact on the performance and budget costs. In this paper, we describe our approach to cost-effective scheduling of MapReduce applications. We present an overview of our framework that enables appropriate configuration of parameters to detect cost-efficient resource allocations. Our early experimental results illustrate the working and benefit of our approach.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"37 1","pages":"147-148"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78606496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
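The performance/budget trade-off the abstract describes reduces, in its simplest form, to choosing among candidate resource configurations. The following sketch is an invented illustration of that selection step, not the paper's framework; the configuration names, predicted runtimes, and costs are assumptions.

```python
# Hypothetical selection step: among candidate configurations with a
# predicted runtime and a budget cost, pick the cheapest one that still
# meets the job's deadline.

def cheapest_feasible(configs, deadline):
    """configs: list of (name, predicted_runtime, cost) tuples."""
    feasible = [c for c in configs if c[1] <= deadline]
    return min(feasible, key=lambda c: c[2], default=None)
```

For instance, with candidates `("small", 120, 1)`, `("medium", 60, 3)`, and `("large", 30, 9)` and a 90-unit deadline, the small configuration is too slow and the medium one wins on cost.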
Bo Zhang, Filip Krikava, Romain Rouvoy, L. Seinturier
There is a trade-off between the number of concurrently running MapReduce jobs and their corresponding map and reduce tasks within a node in a Hadoop cluster. Statically configuring this trade-off to a single value can significantly degrade job response times and leave resource usage suboptimal. To overcome this problem, we propose a feedback-control-loop-based approach that dynamically adjusts the Hadoop resource manager configuration based on the current state of the cluster. A preliminary assessment based on workloads synthesized from real-world traces shows that system performance can be improved by about 30% compared to the default Hadoop setup.
{"title":"Self-Configuration of the Number of Concurrently Running MapReduce Jobs in a Hadoop Cluster","authors":"Bo Zhang, Filip Krikava, Romain Rouvoy, L. Seinturier","doi":"10.1109/ICAC.2015.54","DOIUrl":"https://doi.org/10.1109/ICAC.2015.54","url":null,"abstract":"There is a trade-off between the number of concurrently running MapReduce jobs and their corresponding map and reduce tasks within a node in a Hadoop cluster. Leaving this trade-off statically configured to a single value can significantly reduce job response times leaving only sub optimal resource usage. To overcome this problem, we propose a feedback control loop based approach that dynamically adjusts the Hadoop resource manager configuration based on the current state of the cluster. The preliminary assessment based on workloads synthesized from real-world traces shows that the system performance can be improved by about 30% compared to default Hadoop setup.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"16 1","pages":"149-150"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84897769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
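A feedback loop of the kind described above can be sketched as a proportional controller that nudges the concurrency limit toward a utilization target each control period. This is a minimal sketch under invented parameters (target, gain, bounds), not the paper's controller.

```python
# Illustrative proportional control step: adjust the maximum number of
# concurrently running jobs toward a target cluster utilization.
# target/gain/bounds are invented for the sketch.

def control_step(current_slots, utilization, target=0.8, gain=4, lo=1, hi=64):
    error = target - utilization            # > 0: cluster under-used, admit more
    adjusted = current_slots + round(gain * error)
    return max(lo, min(hi, adjusted))       # keep the limit within sane bounds
```

Run once per monitoring interval, an under-utilized cluster (`utilization=0.5`) grows the limit, while an overloaded one (`utilization=0.95`) shrinks it.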
Essentially, cloud computing is based on a scenario where specialized providers offer a huge amount of resources to stakeholders, who acquire them based on their actual needs. The stakeholders' needs may be characterized by significant fluctuations due to the payment model, which is based on the pay-per-use paradigm. This has an important implication: cloud providers have to make an expensive initial investment in resources. Nowadays, available cloud computing solutions are provided by well-known players on the IT market such as Google, IBM, Amazon, and Microsoft. Small and medium-size players have difficulty competing in this domain. In this paper, we introduce our vision for cloud computing, which aims to (1) exploit adaptive mechanisms for resource management and optimize resource usage by discouraging idle resources, (2) provide a market exchange model exploiting trading mechanisms for resource allocation, and (3) support failure management for critical situations. These objectives are illustrated by a motivating example. Furthermore, the paper offers several implementation hints based on our current prototype.
{"title":"Adaptive Resource Management in the Cloud: The CORT (Cloud Open Resource Trading) Case Study","authors":"C. Raibulet, Andrea Zaccara","doi":"10.1109/ICAC.2015.55","DOIUrl":"https://doi.org/10.1109/ICAC.2015.55","url":null,"abstract":"Essentially, cloud computing is based on a scenario where specialized providers offer a huge amount of resources to stakeholders, which require them based on their actual needs. The stakeholders needs may be characterized by significant fluctuations due to the model of payment, which is based on the pay-per-use paradigm. This has an important implication: cloud providers have to make an initial expensive investment in resources. Nowadays, available cloud computing solutions are provided by known players on the IT market such as Google, IBM, Amazon, or Microsoft. Small or medium size players have difficulties to compete in this domain. In this paper, we introduce our vision for cloud computing which aims to (1) exploit adaptive mechanisms for resource management and optimize resource usage by discouraging idle resources, (2) provide a market exchange model exploiting trading mechanisms for resource allocation, and (3) support failure management for critical situations. These objectives are sustained by a motivating example. Furthermore, the paper introduces several implementation hints based on our current prototype.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"16 1","pages":"343-348"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88208174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kazuo Hashimoto, Keiji Yamada, K. Tabata, Michio Oda, T. Suganuma, A. Biswas, P. Vlacheas, V. Stavroulaki, Dimitris Kelaidonis, A. Georgakopoulos
Intelligent Knowledge as a Service (iKaaS) is an ambitious project aiming to integrate sensor management using the Internet of Things (IoT) with cloud services that employ sensor data. The platform design covers self-healing functions based on self-awareness, as well as basic functions such as inter-cloud operation, security/privacy management, and device and data management. From the viewpoint of application development, ontology sharing is the most important requirement for integrating services. This paper, a first step towards ontology sharing, defines the iKaaS data model as one that integrates the data models used in all applications in the project. Data defined in the iKaaS data model are converted into RDF format and stored in an RDF database. The reasoning mechanism of the semantic web allows the semantic integration of data and applications. The iKaaS project is developing prototype community services, town management and healthcare, in Tagonishi's Smart City. Presenting the iKaaS data model for these services, this paper emphasizes the necessity of higher contextual awareness to achieve better-fitted personalization for the individual.
{"title":"iKaaS Data Modeling: A Data Model for Community Services and Environment Monitoring in Smart City","authors":"Kazuo Hashimoto, Keiji Yamada, K. Tabata, Michio Oda, T. Suganuma, A. Biswas, P. Vlacheas, V. Stavroulaki, Dimitris Kelaidonis, A. Georgakopoulos","doi":"10.1109/ICAC.2015.64","DOIUrl":"https://doi.org/10.1109/ICAC.2015.64","url":null,"abstract":"Intelligent Knowledge as a Service (iKaaS) is an ambitious project aiming to integrate sensor management using the Internet of Things (IoT) with cloud services that employ sensor data. The platform design covers self-healing functions based on self-awareness, as well as basic functions such as inter-cloud operation, security/privacy management, and device and data management. From the viewpoint of application development, ontology sharing is the most important requirement for integrating services. This paper, a first step towards ontology sharing, defines the iKaaS data model as one that integrates the data models used in all applications in the project. Data defined in the iKaaS data model are converted into RDF format and stored in an RDF database. The reasoning mechanism of the semantic web allows the semantic integration of data and applications. The iKaaS project is developing prototype community services, town management and healthcare, in Tagonishi's Smart City. Presenting the iKaaS data model for these services, this paper emphasizes the necessity of higher contextual awareness to achieve better-fitted personalization for the individual.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"7 1","pages":"301-306"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73446086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
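The conversion of records in a data model into RDF, as described in the iKaaS abstract, amounts to flattening each record into subject-predicate-object triples. The sketch below is a hypothetical illustration of that step with invented field names; real deployments would emit proper IRIs and typed literals rather than bare strings.

```python
# Hypothetical sketch: flatten a sensing record into (subject, predicate,
# object) triples, mimicking conversion into RDF for semantic integration.

def to_triples(doc_id, record):
    """record: dict of field -> value; doc_id becomes the triple subject."""
    return [(doc_id, key, value) for key, value in record.items()]
```

So `to_triples("r1", {"sensor": "s1", "temp": 21.5})` yields the two triples `("r1", "sensor", "s1")` and `("r1", "temp", 21.5)`, which an RDF store could then index and reason over alongside triples from other applications.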
Sajib Kundu, R. Rangaswami, Ming Zhao, Ajay Gulati, K. Dutta
The increasing VM density in cloud hosting services makes careful management of physical resources such as CPU, memory, and I/O bandwidth within individual virtualized servers a priority. To maximize cost-efficiency, resource management needs to be coupled with the revenue-generating mechanisms of cloud hosting: the service level agreements (SLAs) of hosted client applications. In this paper, we develop a server resource management framework that substantially reduces data center resource management complexity. Our solution implements revenue-driven dynamic resource allocation, which continuously steers the resource distribution across the VMs hosted on a server so as to maximize the SLA-generated revenue from that server. Our experimental evaluation on a VMware ESX hypervisor highlights the importance of both resource isolation and resource sharing across VMs. The empirical data show a 7%-54% increase in total revenue for mixes of 10-25 VMs hosting similar or diverse workloads, compared to the resource distribution mechanisms currently available in ESX.
{"title":"Revenue Driven Resource Allocation for Virtualized Data Centers","authors":"Sajib Kundu, R. Rangaswami, Ming Zhao, Ajay Gulati, K. Dutta","doi":"10.1109/ICAC.2015.40","DOIUrl":"https://doi.org/10.1109/ICAC.2015.40","url":null,"abstract":"The increasing VM density in cloud hosting services makes careful management of physical resources such as CPU, memory, and I/O bandwidth within individual virtualized servers a priority. To maximize cost-efficiency, resource management needs to be coupled with the revenue generating mechanisms of cloud hosting: the service level agreements (SLAs) of hosted client applications. In this paper, we develop a server resource management framework that reduces data center resource management complexity substantially. Our solution implements revenue-driven dynamic resource allocation which continuously steers the resource distribution across hosted VMs within a server such as to maximize the SLA-generated revenue from the server. Our experimental evaluation for a VMware ESX hyper visor highlights the importance of both resource isolation and resource sharing across VMs. The empirical data shows a 7%-54% increase in total revenue generated for a mix of 10-25 VMs hosting either similar or diverse workloads when compared to using the currently available resource distribution mechanisms in ESX.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"36 1","pages":"197-206"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80746638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
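Revenue-driven allocation of the kind described above can be illustrated with a greedy marginal-revenue rule: repeatedly give the next resource unit to whichever VM's SLA pays the most for it. This is a toy sketch, not the paper's framework; the revenue curves stand in for real SLA models.

```python
# Toy greedy allocator illustrating revenue-driven resource distribution:
# each unit of a resource goes to the VM whose SLA yields the highest
# marginal revenue for its next unit. Revenue functions are hypothetical.

def allocate(total_units, marginal_revenue):
    """marginal_revenue: vm -> f(units_already_held) -> revenue of next unit."""
    alloc = {vm: 0 for vm in marginal_revenue}
    for _ in range(total_units):
        vm = max(alloc, key=lambda v: marginal_revenue[v](alloc[v]))
        alloc[vm] += 1
    return alloc
```

With diminishing marginal revenue per VM (as typical SLA penalty curves imply), this greedy rule equalizes marginal revenue across VMs, which is the condition for maximizing total SLA-generated revenue on the server.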
Distributed systems are widely deployed in recent applications. Many systems share common basic requirements, which motivates adapting reusable solutions for each family of systems. In this paper, we focus on distributed systems designed for large-scale applications requiring a high degree of Reliability and Dynamicity (ReDy distributed systems). We propose a basic architecture for this family of systems and a design solution that guarantees system scalability, fault tolerance, and highly dynamic membership management. The studied systems rely on a hybrid architecture, in which we combine centralized and decentralized solutions.
{"title":"Designing ReDy Distributed Systems","authors":"K. Hafdi, A. Kriouile","doi":"10.1109/ICAC.2015.63","DOIUrl":"https://doi.org/10.1109/ICAC.2015.63","url":null,"abstract":"Distributed systems are largely present and deployed in recent applications. Several systems have common basic requirements, which motivates to adapt reusable solutions for each family of systems. In this paper, we focus on distributed systems designed for large-scale applications requiring a high degree of Reliability and Dynamicity (ReDy distributed systems). We propose a basic architecture for this family of systems and a design solution to guarantee the scalability of the system, the fault tolerance, and a highly dynamic membership management. The studied systems range from hybrid architecture, on which we combine centralized and decentralized solutions.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"90 1","pages":"331-336"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85652188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}