The rapid evolution of backdoor attacks has emerged as a significant threat to the security of autonomous driving models. An attacker injects a backdoor into the model by adding triggers to training samples; the backdoor can later be activated to manipulate the model's inference. Backdoor attacks can lead to severe consequences, such as misidentified traffic signs during autonomous driving, posing a risk of traffic accidents. Recently, frequency-domain backdoor attacks have gradually evolved. However, because changing both the amplitude and its corresponding phase significantly alters image appearance, most existing frequency-domain backdoor attacks modify only the amplitude, which limits attack efficacy. In this work, we propose an attack called IBAQ that solves this problem by blurring the semantic information of the trigger image through a quadratic phase. We first convert the trigger and the benign sample to YCrCb space. We then apply the fast Fourier transform to the Y channel and linearly blend the trigger image's amplitude and quadratic phase with the benign sample's amplitude and phase. IBAQ thus covertly injects trigger information into both amplitude and phase, enhancing the attack's effect. We validate the effectiveness and stealthiness of IBAQ through comprehensive experiments.
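The pipeline described above (FFT on the Y channel, linear blending of amplitudes and of phase with a quadratic phase) can be sketched in NumPy. The quadratic phase is modeled here as a chirp-like phase mask applied to the trigger spectrum, and the blending ratios `alpha` and `beta` are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def blend_trigger(benign_y, trigger_y, alpha=0.1, beta=0.1, c=0.05):
    """Sketch of frequency-domain trigger injection on the Y channel.

    benign_y, trigger_y: 2-D float arrays (Y channel of YCrCb images).
    alpha, beta, c: illustrative parameters (assumptions, not the paper's).
    """
    Fb = np.fft.fft2(benign_y)
    Ft = np.fft.fft2(trigger_y)
    amp_b, ph_b = np.abs(Fb), np.angle(Fb)
    amp_t = np.abs(Ft)

    # Quadratic (chirp-like) phase mask that blurs the trigger's semantic
    # content in the spatial domain -- the exact form is an assumption.
    h, w = trigger_y.shape
    u = np.fft.fftfreq(h)[:, None]
    v = np.fft.fftfreq(w)[None, :]
    quad_phase = np.angle(Ft * np.exp(1j * c * (u ** 2 + v ** 2)))

    # Linear blending of amplitude and (quadratic) phase.
    amp = (1 - alpha) * amp_b + alpha * amp_t
    ph = (1 - beta) * ph_b + beta * quad_phase
    return np.fft.ifft2(amp * np.exp(1j * ph)).real
```

With `alpha = beta = 0` the benign image is recovered exactly, so the two ratios directly trade off stealthiness against trigger strength.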
The Intelligent Transportation System (ITS) serves as a pivotal element within urban networks, offering decision support to users and connected automated vehicles (CAVs) through comprehensive information gathering, sensing, device control, and data processing. Presently, ITS predominantly relies on sensors embedded in fixed infrastructure, notably Roadside Units (RSUs). However, RSUs are confined by coverage limitations and may struggle to respond promptly to emergencies. On-demand resources, such as drones, are a viable option to effectively supplement these deficiencies. This paper introduces an approach that integrates Software-Defined Networking (SDN) and Mobile Edge Computing (MEC) technologies to form a high-availability drone swarm control and communication infrastructure framework comprising a cloud layer, an edge layer, and a device layer. Drones face limited flight duration due to battery constraints, making sustained monitoring of road conditions over extended periods challenging. Effective drone scheduling is a promising way to overcome these constraints. To tackle this issue, we first use Graph WaveNet, a graph neural network architecture tailored for spatial-temporal graph modeling, to train a congestion prediction model on real-world data. Building upon this, we further propose a drone scheduling algorithm based on congestion prediction. Our simulation experiments using real-world data demonstrate that, compared to the baseline method, the proposed scheduling algorithm not only yields superior scheduling gains but also mitigates drone idle rates.
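As a rough illustration of congestion-driven scheduling, the following greedy rule assigns the most-charged available drones to the road segments with the highest predicted congestion. This is a simplified stand-in, not the paper's algorithm; the function names, the battery reserve threshold `min_batt`, and the greedy matching are all assumptions:

```python
def schedule_drones(pred_congestion, drones, battery, min_batt=0.2):
    """Greedy sketch: dispatch the most-charged available drones to the
    segments with the highest predicted congestion.

    pred_congestion: dict segment -> predicted congestion score
        (e.g., from a Graph WaveNet forecast).
    drones: list of drone ids; battery: dict drone -> level in [0, 1].
    Illustrative only -- not the scheduling algorithm from the paper.
    """
    # Hotspots first; drones below the reserve threshold must recharge.
    hotspots = sorted(pred_congestion, key=pred_congestion.get, reverse=True)
    available = sorted((d for d in drones if battery[d] >= min_batt),
                       key=lambda d: battery[d], reverse=True)
    # Pair them up until either list runs out.
    return dict(zip(hotspots, available))
```

A real scheduler would also account for travel time to the segment and recharge-station placement; this sketch only shows the prediction-to-dispatch coupling.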
Container security has received much research attention recently. Previous work has proposed applying various machine learning techniques to detect security attacks in containerized applications. On one hand, supervised machine learning schemes require sufficient labeled training data to achieve good attack detection accuracy. On the other hand, unsupervised machine learning methods are more practical because they avoid the need for labeled training data, but they often suffer from high false alarm rates. In this paper, we present a generic self-supervised hybrid learning (SHIL) framework for achieving efficient online security attack detection in containerized systems. SHIL can effectively combine both unsupervised and supervised learning algorithms without requiring any manual data labeling. We have implemented a prototype of SHIL and conducted experiments over 46 real-world security attacks in 29 commonly used server applications. Our experimental results show that SHIL can reduce false alarms by 33-93% compared to existing supervised, unsupervised, or semi-supervised machine learning schemes while achieving a higher or similar detection rate.
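One common way to combine unsupervised and supervised learning without manual labels is pseudo-labeling: the unsupervised anomaly score labels only its most confident points, and a supervised stage fitted on those pseudo-labels is used to confirm alarms. The sketch below illustrates this pattern in miniature; the actual SHIL design is more involved, and the quantile thresholds and the degenerate "classifier" here are assumptions:

```python
import numpy as np

def shil_sketch(train_scores_unlabeled, new_scores, hi=0.9, lo=0.1):
    """Minimal pseudo-labeling hybrid, assuming a 1-D anomaly score.

    The most anomalous training points become pseudo-attacks and the
    least anomalous become pseudo-normal; a simple decision boundary
    fitted from them (a stand-in for a real supervised classifier)
    then filters new alarms. Thresholds are illustrative assumptions.
    """
    s = np.asarray(train_scores_unlabeled, dtype=float)
    # Self-labeling: no manual labels are needed.
    pos = s[s >= np.quantile(s, hi)]   # pseudo-attack
    neg = s[s <= np.quantile(s, lo)]   # pseudo-normal
    # "Supervised" stage: midpoint boundary fit from the pseudo-labels.
    boundary = (pos.mean() + neg.mean()) / 2.0
    # Alarm only when a new score clears the fitted boundary,
    # which is what suppresses the unsupervised false alarms.
    return np.asarray(new_scores, dtype=float) >= boundary
```

The false-alarm reduction comes from the second stage: borderline unsupervised scores that would have fired an alert are rejected by the boundary learned from the confident extremes.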
We present novel techniques for simultaneous task allocation and planning in multi-robot systems operating under uncertainty. By performing task allocation and planning simultaneously, allocations are informed by individual robot behaviour, creating more efficient team behaviour. We go beyond existing work by planning for task reallocation across the team given a model of partial task satisfaction under potential robot failures and uncertain action outcomes. We model the problem using Markov decision processes, with tasks encoded in co-safe linear temporal logic, and optimise for the expected number of tasks completed by the team. To avoid the inherent complexity of joint models, we propose an alternative model that considers both task allocation and planning, but in a sequential fashion. We then build a joint policy from the sequential policy obtained from our model, thus allowing for concurrent policy execution. Furthermore, to enable adaptation in the case of robot failures, we consider replanning from failure states and propose an approach to preemptively replan in an anytime fashion, replanning for the more probable failure states first. Our method also allows us to quantify the performance of the team by providing an analysis of properties such as the expected number of completed tasks under concurrent policy execution. We implement and extensively evaluate our approach on a range of scenarios. We compare its performance to a state-of-the-art baseline in decoupled task allocation and planning: sequential single-item auctions. Our approach outperforms the baseline in terms of computation time and the number of times replanning is required after robot failures.
Edge Computing places computational services and resources closer to the user to reduce latency and ensure quality of service and experience. Low latency, context awareness, and mobility support are the major contributors to edge-enabled smart systems. Such systems must handle new situations and changes on the fly and ensure quality of service while having access only to constrained computation and communication resources and operating in mobile, dynamic, and ever-changing environments. Hence, adaptation and self-organisation are crucial for such systems to maintain their performance and operability while accommodating changes in their environment.
This paper reviews the current literature in the field of adaptive Edge Computing systems. We use a widely accepted taxonomy that describes the important aspects of implementing adaptive behaviour in computing systems. This taxonomy covers aspects such as the reasons for adaptation, the various levels at which an adaptation strategy can be implemented, the time of reaction to a change, the categories of adaptation techniques, and the control of adaptive behaviour. In this paper, we discuss how these aspects are addressed in the literature and identify the open research challenges and future directions in adaptive Edge Computing systems.
The results of our analysis show that most of the identified approaches target adaptation at the application level, and only a few focus on the middleware, the communication infrastructure, and the context. Adaptations required to address changes in the context, changes caused by users, or changes in the system itself are also less explored. Furthermore, most of the literature has opted for reactive adaptation, although proactive adaptation is essential to maintain edge computing systems' performance and interoperability by anticipating the required adaptations on the fly. Additionally, most approaches apply centralised adaptation control, which does not fit well with the mostly decentralised/distributed Edge Computing settings.
Power capping is an important technique for high-density servers to safely oversubscribe the power infrastructure in a data center. However, power capping is commonly accomplished by dynamically lowering the server processors' frequency levels, which can result in degraded application performance. For servers that run important machine learning (ML) applications with Service-Level Objective (SLO) requirements, inference performance such as recognition accuracy must be optimized within a certain latency constraint, which demands high server performance. In order to achieve the best inference accuracy under the desired latency and server power constraints, this paper proposes OptimML, a multi-input-multi-output (MIMO) control framework that jointly controls both inference latency and server power consumption by flexibly adjusting the machine learning model size (and thus its required computing resources) when server frequency needs to be lowered for power capping. Our results on a hardware testbed with widely adopted ML frameworks (including PyTorch, TensorFlow, and MXNet) show that OptimML achieves higher inference accuracy compared with several well-designed baselines, while respecting both latency and power constraints. Furthermore, an adaptive control scheme with online model switching and estimation is designed to achieve analytic assurance of control accuracy and system stability, even in the face of significant workload/hardware variations.
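The joint actuation described above (lowering frequency for power capping while switching model size to hold the latency SLO) can be illustrated with a simple rule-based loop. Note that OptimML uses a formal MIMO control design with analytic stability assurance; this greedy stand-in only conveys the actuation logic, and all thresholds and step sizes are assumptions:

```python
def control_step(latency, power, lat_slo, power_cap, freq, model_idx,
                 models, freq_min=0.4, step=0.1):
    """One step of a simplified stand-in for a joint latency/power
    controller. models is ordered from smallest/fastest to
    largest/most accurate; the goal is to run the largest model that
    still meets the SLO at the frequency the power cap allows.
    Illustrative assumptions only, not OptimML's MIMO controller.
    """
    if power > power_cap and freq > freq_min:
        freq = max(freq_min, freq - step)          # enforce power cap
    elif power < 0.9 * power_cap and freq < 1.0:
        freq = min(1.0, freq + step)               # reclaim headroom
    if latency > lat_slo and model_idx > 0:
        model_idx -= 1                             # shrink model for SLO
    elif latency < 0.8 * lat_slo and model_idx < len(models) - 1:
        model_idx += 1                             # grow model for accuracy
    return freq, model_idx
```

The key point the sketch preserves is the coupling: when power capping forces the frequency down, the model-size knob absorbs the latency impact so the SLO is met at the cost of some accuracy rather than missed deadlines.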
Digitalization enables the automation required to operate modern cyber-physical energy systems (CPESs), leading to a shift from hierarchical to organic systems. However, digitalization increases the number of factors affecting the state of a CPES (e.g., software bugs and cyber threats). In addition to established factors such as functional correctness, others such as security become relevant but have yet to be integrated into an operational viewpoint, i.e., a holistic perspective on the system state. Trust, as used in organic computing, is an approach to gaining a holistic view of the state of systems. It consists of several facets (e.g., functional correctness, security, and reliability), which can be used to assess the state of a CPES. Therefore, a trust assessment at all levels can contribute to a coherent state assessment. This paper focuses on trust in ICT-enabled grid services in a CPES. These services are essential for operating the CPES, and their performance relies on data aspects such as availability, timeliness, and correctness. This paper proposes assessing the trust in the involved components and data to estimate data correctness, which is crucial for grid services. The assessment is presented for two exemplary grid services, namely state estimation and coordinated voltage control. Furthermore, the interpretation of the different trust facets is discussed.
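To make the component-and-data assessment concrete, one conservative way to combine trust facets into a correctness estimate is to take the weakest facet of the source component and of the data itself and multiply them, then weight redundant measurements accordingly in a service such as state estimation. This aggregation is purely illustrative; the paper's actual assessment may combine the facets differently:

```python
def data_trust(component_facets, data_facets):
    """Conservative sketch: a measurement is only as trustworthy as its
    weakest component facet and its weakest data facet. Facets are
    assumed to be scores in [0, 1]; the min/product combination is an
    illustrative assumption, not the paper's method.
    """
    comp = min(component_facets.values())   # e.g., security, reliability
    data = min(data_facets.values())        # e.g., availability, timeliness
    return comp * data

def weighted_estimate(measurements, trusts):
    """Toy use in state estimation: trust-weighted average of redundant
    measurements of the same quantity (illustrative only)."""
    total = sum(trusts)
    return sum(m * t for m, t in zip(measurements, trusts)) / total
```

In this sketch, a low trust score down-weights a measurement rather than discarding it, which matches the idea of feeding a graded correctness estimate into grid services instead of a binary valid/invalid flag.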