{"title":"Constructing Execution and Life-Cycle Models for Smart City Services with Self-Aware IoT","authors":"Masahide Nakamura, L. D. Bousquet","doi":"10.1109/ICAC.2015.57","DOIUrl":"https://doi.org/10.1109/ICAC.2015.57","url":null,"abstract":"Although various smart city projects are launched in all over the world, it is not obvious how to tailor the existing IoT and self-aware technologies for individual services, systematically. One of the reason is due to the lack of common view that can be used to investigate various smart city services across different domains. This paper proposes a domain-neutral execution model and an integrated life-cycle model of smart city services. We first identify essential activities for smart city services based on the city-as-a-state-machine concept. We then adopt goal-oriented thinking which clearly decomposes a goal and a means for each of the essential activities. By doing so, the proposed models can grasp essentials of any smart city service with domain-neutral activities and life cycles, while domain-specific parts can be varied by the means. Using the proposed models, we conduct a case study with smart car parking, where the proposed method compares the four different parking services. Finally, we develop ideas where and how the IoT and self-aware technologies can be applied effectively.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"69 1","pages":"289-294"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90885996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Hierarchical Mixed Integer Programming for Pack-to-Swad Placement in Datacenters","authors":"Ye Xia, Maurício O. Tsugawa, J. Fortes, Shigang Chen","doi":"10.1109/ICAC.2015.23","DOIUrl":"https://doi.org/10.1109/ICAC.2015.23","url":null,"abstract":"In this paper, we introduce a pack-centric approach to data center resource management by abstracting a system as a pack of resources and considering the mapping of these packs onto physical data center resource groups, called swads. The assignment of packs/VMs to swads/PMs is formulated as an integer optimization problem that can capture constraints related to the available resources, data center efficiency and customers' complex requirements. Scalability is achieved through a hierarchical decomposition method. We illustrate aspects of the proposed approach by describing and experimenting with a concrete and challenging resource allocation problem.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"24 1","pages":"219-222"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74312599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Extensible Autonomous Reconfiguration Framework for Complex Component-Based Embedded Systems","authors":"Johannes Schlatow, Mischa Moestl, R. Ernst","doi":"10.1109/ICAC.2015.18","DOIUrl":"https://doi.org/10.1109/ICAC.2015.18","url":null,"abstract":"We present a framework based on constraint satisfaction that adds self-integration capabilities to component-based embedded systems by identifying correct compositions of the desired components and their dependencies. This not only allows autonomous integration of additional functionality but can also be extended to ensure that the new configuration does not violate any extra-functional requirements, such as safety or security, imposed by the application domain.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"125 38","pages":"239-242"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91509060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Demonstrating Voice over an Autonomic Network","authors":"Lan Wang, E. Gelenbe","doi":"10.1109/ICAC.2015.14","DOIUrl":"https://doi.org/10.1109/ICAC.2015.14","url":null,"abstract":"We demonstrate experimentally how an Autonomic Network based on the CPN protocol can provide the Quality of Service (QoS) required by voice communications. The implementation uses Reinforcement Learning to dynamically seek paths that meet the quality requirements of voice communications. Measurements of packet delay, jitter, and loss illustrate the performance obtained from the system.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"27 1","pages":"139-140"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82546969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributed Real-Time Event Analysis","authors":"J. Stephen, D. Gmach, Rob Block, A. Madan, Alvin AuYoung","doi":"10.1109/ICAC.2015.12","DOIUrl":"https://doi.org/10.1109/ICAC.2015.12","url":null,"abstract":"Security Information and Event Management (SIEM) systems perform complex event processing over a large number of event streams at high rate. As event streams increase in volume and event processing becomes more complex, traditional approaches such as scaling up to more powerful systems quickly become ineffective. This paper describes the design and implementation of DRES, a distributed, rule-based event evaluation system that can easily scale to process a large volume of non-trivial events. DRES intelligently forwards events across a cluster of nodes to evaluate complex correlation and aggregation rules. This approach enables DRES to work with any rules engine implementation. Our evaluation shows DRES scales linearly to more than 16 nodes. At this size it successfully processed more than half a million events per second.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"1 1","pages":"11-20"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82043923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Integrating Trusted Execution Environment into Embedded Autonomic Systems","authors":"M. Sabt, Mohammed Achemlal, A. Bouabdallah","doi":"10.1109/ICAC.2015.27","DOIUrl":"https://doi.org/10.1109/ICAC.2015.27","url":null,"abstract":"Nowadays, there is a trend to integrate trusted computing concepts into autonomic systems. In this context, the Trusted Execution Environment (TEE) was designed to enrich the previously defined trusted platforms. TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the system. In this work, we propose an architecture in which embedded autonomic systems rely on the properties of TEE to guarantee both their self-protection and self-healing.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"47 1","pages":"165-166"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77150659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adding a Deliberative Layer to an Autonomic System","authors":"Marius Pol","doi":"10.1109/ICAC.2015.32","DOIUrl":"https://doi.org/10.1109/ICAC.2015.32","url":null,"abstract":"Autonomic systems appear as closed because their internal logic cannot be communicated to their users. This work presents a method to solve this communication problem. The objective is to let the user specify, negotiate and observe the high-level objectives imposed to the autonomic system without affecting the latter's self-management capabilities, but also to enable relevant communication from the system towards the user. A smart micro-grid is the autonomic system used. A procedure that generates relevant arguments is connected at the monitoring level of the grid as a deliberative layer.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"113 1","pages":"143-144"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74833169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replication for Predictability in a Java RPC Framework","authors":"Jianwei Tu, Christopher Stewart","doi":"10.1109/ICAC.2015.49","DOIUrl":"https://doi.org/10.1109/ICAC.2015.49","url":null,"abstract":"We propose a transport mechanism using replication for predictability to achieve low FCT for short flows. For each short TCP flow, we replicate it and send the identical packets for both flows by creating two connections to the receiver. The application uses the first flow that finishes the transfer. We observe that the congestion levels of different paths in data center networks are statistically independent. The original flow and replicated flow are highly likely to traverse different paths, reducing the probability of queuing delay. We implement flow replication in Apache Thrift transport layer. Apache Thrift is a RPC framework that supports multiple languages, especially Java. It can be used as a middleware at the application layer that means these is no need to modify the switches and operating systems. We conduct the experiments on our private cloud and Amazon EC2 data center. The latest EC2 data center is known to have multiple equal cost paths between two virtual machines. Our experiment results show that replication for predictability can reduce the Flow Completion Time of short TCP flows over 20%. 
When integrated with Cassandra, we can also improve the performance of Read operation with flow replication.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"97 1","pages":"163-164"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80804212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fossa: Learning ECA Rules for Adaptive Distributed Systems","authors":"Alexander Frömmgen, R. Rehner, Max Lehn, A. Buchmann","doi":"10.1109/ICAC.2015.37","DOIUrl":"https://doi.org/10.1109/ICAC.2015.37","url":null,"abstract":"The development of adaptive distributed systems is complex. Due to a large amount of interdependencies and feedback loops between network nodes and software components, distributed systems respond nonlinearly to changes in the environment and system adaptations. Although Event Condition Action (ECA) rules allow a crisp definition of the adaptive behavior and a loose coupling with the actual system implementation, defining concrete rules is nontrivial. It requires specifying the events and conditions which trigger adaptations, as well as the selection of appropriate actions leading to suitable new configurations. In this paper, we present the idea of Fossa, an ECA framework for adaptive distributed systems. Following a methodology that separates the adaptation logic from the actual application implementation, we propose learning ECA rules by automatically executing a multitude of tests. Rule sets are generated by algorithms such as genetic programming, and the results are evaluated using a utility function provided by the developer. 
Fossa therefore provides an automated offline learner that derives suitable ECA rules for a given utility function.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"10 1","pages":"207-210"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72795232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Energy, Locality and Priority in a MapReduce Cluster","authors":"Yijun Ying, R. Birke, Cheng Wang, L. Chen, N. Gautam","doi":"10.1109/ICAC.2015.30","DOIUrl":"https://doi.org/10.1109/ICAC.2015.30","url":null,"abstract":"To strike a balance between optimizing for energy versus performance in data centers is extremely tricky because the workloads are significantly different with varying constraints on performance. This issue is exacerbated with the introduction of MapReduce over and above conventional web applications. In particular, with batch versus interactive MapReduce, e.g., Spark system, data availability and locality drive performance while exhibiting different degrees of delay sensitivities. In this paper we consider an energy minimization framework (which is formulated as a concave minimization problem) with explicit modeling of (i) time variability, (ii) data locality, and (iii) delay sensitivity of web applications, batch MapReduce, and interactive MapReduce. Our objective is to maximize the usage of MapReduce servers by delaying the batch MapReduce and offering the execution to web workloads whenever capacity permits. We propose a two-step approach which first employs a controller dynamically allocating servers to the three types of workloads and secondly designs a MapReduce scheduler achieving the optimal data locality. To cater to the stochastic nature of workloads, we use a Makov Decision Process model to design the allocation algorithm at the controller and derive the structure of the optimal. The proposed locality-aware scheduler is specifically engineered to sustain the throughput during the transient overload caused by insufficient server allocation for the batch-MapReduce. 
We conclude by presenting simulation results from an extensive set of experiments, and these results indicate the efficacy of the methodology proposed by keeping the data center costs to a minimum while ensuring the delay constraints of workloads are met.","PeriodicalId":6643,"journal":{"name":"2015 IEEE International Conference on Autonomic Computing","volume":"1 1","pages":"21-30"},"PeriodicalIF":0.0,"publicationDate":"2015-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87494630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}