In this article, we propose and implement a distributed autonomic manager that maintains service level agreements (SLAs) for each application scenario. The proposed autonomic manager supports SLAs by configuring the bandwidth ratios for each application scenario and uses an overlay network as its infrastructure. The most important aspect of the proposed autonomic manager is its scalability, which allows it to handle geographically distributed cloud-based applications and large volumes of computation. This is useful for look-ahead optimization and for adaptations that use complex models, such as machine learning. We formally prove the safety and liveness properties of the implemented distributed algorithms. Through experiments on the Amazon AWS cloud, using two different use cases, we demonstrate the elasticity and flexibility of the autonomic manager as a measure of its applicability to cloud applications with different types of workloads. Experiments also demonstrate that increasing the size of the look-ahead window, up to a certain point, improves the accuracy of the adaptation decisions by up to 50%.
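To make the look-ahead idea concrete, the sketch below averages a predicted workload over a window of future intervals and sets bandwidth ratios proportionally to expected demand. The function names and the proportional allocation rule are illustrative assumptions, not the algorithm implemented by the autonomic manager.

```python
# Minimal sketch of a look-ahead adaptation step, assuming a workload
# prediction per scenario over a window of future intervals. The
# `predict_workload` callable and the proportional rule are assumptions.

def lookahead_bandwidth_ratios(scenarios, window, predict_workload):
    """Average predicted demand over the look-ahead window and allocate
    bandwidth ratios proportionally to each scenario's expected demand."""
    avg_demand = {
        s: sum(predict_workload(s, t) for t in range(window)) / window
        for s in scenarios
    }
    total = sum(avg_demand.values()) or 1.0
    return {s: d / total for s, d in avg_demand.items()}

# Example: two scenarios with a flat synthetic forecast.
ratios = lookahead_bandwidth_ratios(
    ["web", "batch"], window=5,
    predict_workload=lambda s, t: 3.0 if s == "web" else 1.0)
print(ratios)  # {'web': 0.75, 'batch': 0.25}
```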
Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems that the system is unaware of. One way of achieving this synergy is by placing the human operator on the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, an explanation can play an important role in allowing the human operator to understand why the system is making certain decisions and in improving the operator's knowledge about the system. This, in turn, may improve the operator's ability to intervene and, if necessary, override the decisions being made by the system. However, explanations may incur costs, in terms of delayed actions and the possibility that a human may make a poor judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, what kind of explanation should be provided to the operator. In this work, we define a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, we characterize explanations in terms of explanation content, effect, and cost. We then present a dynamic system adaptation approach that leverages a probabilistic reasoning technique to determine when an explanation should be used to improve overall system utility. We evaluate our explanation framework in the context of a realistic industrial control system with adaptive behaviors.
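The decision of whether to offer an explanation can be framed as an expected-utility comparison. The sketch below contrasts the expected utility of an operator intervention with and without an explanation, charging a cost for the explanation; the probabilities and utility values are illustrative assumptions rather than figures from the framework.

```python
# Sketch of the expected-utility trade-off for offering an explanation.
# All numeric parameters below are assumed for illustration only.

def expected_utility(p_correct, u_good, u_bad, explain_cost=0.0):
    """Expected utility of an operator decision, minus any explanation cost."""
    return p_correct * u_good + (1 - p_correct) * u_bad - explain_cost

# Without an explanation the operator judges correctly less often; with one,
# accuracy improves but a delay cost is paid.
u_no_expl = expected_utility(p_correct=0.6, u_good=10, u_bad=-5)
u_expl = expected_utility(p_correct=0.9, u_good=10, u_bad=-5, explain_cost=1.5)
provide_explanation = u_expl > u_no_expl
print(u_no_expl, u_expl, provide_explanation)  # 4.0 7.0 True
```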
In today’s world, the circumstances, processes, and requirements of software systems are becoming increasingly complex. To operate properly in such dynamic environments, software systems must adapt to these changes, which has led to the research area of Self-Adaptive Systems (SAS). Platooning, the ability of vehicles to travel with close inter-vehicle distances, is one example of an adaptive system in Intelligent Transportation Systems. This technology increases road throughput and safety, directly addressing the growing infrastructure demands caused by rising traffic volumes. However, the No-Free-Lunch theorem states that the performance of one adaptation planning strategy is not necessarily transferable to other problems. Moreover, especially in the field of SAS, the selection of the most appropriate strategy depends on the current situation of the system. In this paper, we address the problem of self-aware optimization of adaptation planning strategies by designing a framework that comprises situation detection, strategy selection, and parameter optimization of the selected strategies. We apply our approach to the platooning coordination case study and evaluate the performance of the proposed framework.
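A minimal sketch of the detect-select-optimize cycle is given below; the situation labels, candidate strategies, and scoring tables are illustrative assumptions rather than components of the proposed framework.

```python
# Illustrative detect-select-optimize cycle. Situation labels, strategies,
# and scores are assumed values, not taken from the framework itself.

def detect_situation(traffic_density):
    """Classify the current system situation from an observed metric."""
    return "dense" if traffic_density > 0.7 else "sparse"

# Per-situation performance estimates for each planning strategy (assumed).
STRATEGY_SCORES = {
    "dense":  {"greedy_join": 0.6, "global_replan": 0.9},
    "sparse": {"greedy_join": 0.8, "global_replan": 0.5},
}

def select_strategy(situation):
    scores = STRATEGY_SCORES[situation]
    return max(scores, key=scores.get)

def optimize_parameters(strategy, candidates):
    """Placeholder parameter search: pick the candidate with the best
    assumed utility for the chosen strategy."""
    utility = {"gap_small": 0.7, "gap_large": 0.4}
    return max(candidates, key=utility.get)

situation = detect_situation(traffic_density=0.85)           # "dense"
strategy = select_strategy(situation)                         # "global_replan"
params = optimize_parameters(strategy, ["gap_small", "gap_large"])
print(situation, strategy, params)
```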
Multi-agent systems provide a basis for developing systems of autonomous entities and thus find application in a variety of domains. We consider a setting where not only the member agents are adaptive but the multi-agent system itself, viewed as an entity in its own right, is adaptive as well. Specifically, the social structure of a multi-agent system can be reflected in the social norms among its members. It is well recognized that the norms that arise in a society are not always beneficial to its members. We focus on prosocial norms, which help achieve positive outcomes for society and often provide guidance to agents to act in a manner that takes into account the welfare of others.
Specifically, we propose Cha, a framework for the emergence of prosocial norms. Unlike previous norm emergence approaches, Cha supports continual change to a system (agents may enter and leave) and dynamism (norms may change when the environment changes). Importantly, Cha agents incorporate prosocial decision-making based on inequity aversion theory, reflecting an intuition of guilt arising from being antisocial. In this manner, Cha brings together two important themes in prosociality: decision-making by individuals and fairness of system-level outcomes. We demonstrate via simulation that Cha can improve aggregate societal gains and fairness of outcomes.
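One widely used formalization of inequity aversion is the Fehr-Schmidt utility, in which an agent's payoff is discounted by envy (others earning more) and by guilt (the agent earning more than others). Whether Cha adopts exactly this functional form is an assumption; the sketch below only illustrates the guilt intuition described above.

```python
# Sketch of inequity-averse utility in the style of Fehr-Schmidt (1999).
# Whether Cha uses exactly this form is an assumption; the beta ("guilt")
# term penalises payoffs above others', the alpha term payoffs below.

def inequity_averse_utility(i, payoffs, alpha=0.8, beta=0.4):
    n = len(payoffs)
    others = [payoffs[j] for j in range(n) if j != i]
    envy  = sum(max(x - payoffs[i], 0) for x in others) / (n - 1)
    guilt = sum(max(payoffs[i] - x, 0) for x in others) / (n - 1)
    return payoffs[i] - alpha * envy - beta * guilt

# An agent earning more than its peers discounts its own payoff out of guilt.
print(inequity_averse_utility(0, [10, 4, 4]))  # 10 - 0.4*6 = 7.6
```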
We focus on the online multi-object k-coverage problem (OMOkC), where mobile robots are required to sense a mobile target from k diverse points of view, coordinating themselves in a scalable and possibly decentralised way. There is active research on OMOkC, particularly in the design of decentralised algorithms for solving it. We propose a new take on the issue: Rather than classically developing new algorithms, we apply a macro-level paradigm, called aggregate computing, specifically designed to directly program the global behaviour of a whole ensemble of devices at once. To understand the potential of the application of aggregate computing to OMOkC, we extend the Alchemist simulator (supporting aggregate computing natively) with a novel toolchain component supporting the simulation of mobile robots. This way, we build a software engineering toolchain comprising language and simulation tooling for addressing OMOkC. Finally, we exercise our approach and related toolchain by introducing new algorithms for OMOkC; we show that they can be expressed concisely, reuse existing software components and perform better than the current state-of-the-art in terms of coverage over time and number of objects covered overall.
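For intuition, the sketch below performs a simple greedy k-coverage assignment from a centralised viewpoint, giving each target its k nearest still-unassigned robots. This is only an illustrative baseline; the aggregate computing programs discussed in the paper coordinate robots in a decentralised way through local field computations.

```python
# Illustrative greedy baseline for k-coverage. The centralised assignment
# and the specific positions below are assumptions for demonstration only.

from math import dist

def greedy_k_coverage(robots, targets, k):
    """Assign each target the k nearest still-unassigned robots."""
    free = dict(robots)                      # robot id -> (x, y)
    assignment = {}
    for t_id, t_pos in targets.items():
        nearest = sorted(free, key=lambda r: dist(free[r], t_pos))[:k]
        assignment[t_id] = nearest
        for r in nearest:
            free.pop(r)
    return assignment

robots = {"r1": (0, 0), "r2": (1, 0), "r3": (5, 5), "r4": (6, 5)}
targets = {"t1": (0.5, 0), "t2": (5.5, 5)}
print(greedy_k_coverage(robots, targets, k=2))
# {'t1': ['r1', 'r2'], 't2': ['r3', 'r4']}
```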