Kubernetes has transformed traditional monolithic Internet of Things (IoT) applications into lightweight, decentralized, and independent microservices, becoming the de facto standard for container orchestration. Intelligent and efficient container placement in Mobile Edge Computing (MEC) remains challenging due to user mobility and abundant yet heterogeneous computing resources. One response to constantly changing user locations is to relocate containers closer to the user; however, this leaves additional active nodes underutilized and increases the computational overhead of migration. Conversely, performing few or no migrations results in higher latency, degrading the Quality of Service (QoS). To tackle these challenges, we have created a framework named EdgeBus, which enables the co-simulation of container resource management in heterogeneous Kubernetes-based MEC environments and the assessment of the impact of container migrations on resource management, energy, and latency. Further, we propose MANGO, a lightweight mobility- and migration-cost-aware scheduler that incorporates migration cost, CPU cores, and memory usage into container scheduling decisions. For user mobility, the Cabspotting dataset is employed, which contains real-world traces of taxi mobility in San Francisco. Within the EdgeBus framework, we have created a simulated environment complemented by a real-world testbed on Google Kubernetes Engine (GKE) to measure the performance of the MANGO scheduler against baseline schedulers such as IMPALA-based MobileKube, Latency Greedy, and Binpacking. Finally, extensive experiments have been conducted, which demonstrate the effectiveness of MANGO in terms of latency and the number of migrations.
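To make the trade-off concrete, the following is a minimal sketch of a migration-cost-aware node-scoring heuristic in the spirit described above. It is illustrative only, not the published MANGO algorithm: the weights, field names (`cpu_free`, `mem_free`, `user_rtt`), and scoring formula are all assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    cpu_free: float   # free CPU cores (hypothetical field)
    mem_free: float   # free memory in GiB (hypothetical field)
    user_rtt: float   # network latency to the user's current location, ms

def score(node: Node, current: Optional[Node],
          w_mig: float = 0.5, w_cpu: float = 0.25, w_mem: float = 0.25) -> float:
    """Lower is better: a latency term, a fixed penalty for migrating away
    from the current node, and a pressure term favoring spare CPU/memory.
    All weights are illustrative assumptions."""
    migration = 0.0 if current is None or node.name == current.name else 1.0
    pressure = w_cpu / (1 + node.cpu_free) + w_mem / (1 + node.mem_free)
    return node.user_rtt + w_mig * migration + pressure

def place(current: Optional[Node], candidates: List[Node]) -> Node:
    """Pick the candidate node with the lowest combined score."""
    return min(candidates, key=lambda n: score(n, current))
```

With such a score, a container only moves when the latency improvement outweighs the fixed migration penalty, which captures the tension between chasing the user and avoiding excess migrations.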
Medical Body Area Networks (MBANs), a specialized subset of Wireless Body Area Networks (WBANs), are crucial for enabling medical data collection, processing, and transmission. The IEEE 802.15.6 standard governs these networks but falls short in practical MBAN scenarios. This paper introduces ASAP, a Lightweight Authenticated Secure Association Protocol integrated with IEEE 802.15.6. ASAP prioritizes patient privacy with randomized node ID generation and temporary shared keys, preventing node tracking and privacy violations. It optimizes network performance by consolidating Master Keys (MK), Pairwise Temporal Keys (PTK), and Group Temporal Keys (GTK) creation into a unified process, ensuring the efficiency of the standard four-message association protocol. ASAP enhances security by eliminating the need for pre-shared keys, reducing the attack surface, and improving forward secrecy. The protocol achieves mutual authentication without pre-shared keys or passwords and supports advanced cryptographic algorithms on nodes with limited processing capabilities. Additionally, it imposes connection initiation restrictions, requiring valid certificates for nodes, thereby addressing gaps in IEEE 802.15.6. Formal verification using Verifpal confirms ASAP's resilience against various attacks. Implementation results show ASAP's superiority over standard IEEE 802.15.6 protocols, establishing it as a robust solution for securing MBAN communications in medical environments.
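The consolidation of MK, PTK, and GTK creation into a unified process can be pictured as deriving all three keys from one association secret with distinct labels. The sketch below is a generic HKDF-SHA256 construction for illustration only; the labels, key length, and salt layout are assumptions, not the ASAP specification.

```python
import hmac
import hashlib

def hkdf(secret: bytes, salt: bytes, info: bytes, length: int = 16) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869), extract-then-expand, single block."""
    prk = hmac.new(salt, secret, hashlib.sha256).digest()   # extract
    okm = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # expand
    return okm[:length]

def derive_keys(shared_secret: bytes, nonce_hub: bytes, nonce_node: bytes) -> dict:
    """Derive MK, PTK, and GTK in one pass from a single shared secret.
    Labels and salt composition are hypothetical, for illustration."""
    salt = nonce_hub + nonce_node
    return {
        "MK":  hkdf(shared_secret, salt, b"mban master key"),
        "PTK": hkdf(shared_secret, salt, b"mban pairwise temporal key"),
        "GTK": hkdf(shared_secret, salt, b"mban group temporal key"),
    }
```

Because every key comes from one key-derivation pass bound to fresh nonces, no long-lived pre-shared key needs to exist on the node, which is the property the abstract highlights for reducing the attack surface.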
In traditional IoT applications, energy saving is essential while high bandwidth is not always required. However, a new wave of IoT applications exhibits stricter requirements in terms of bandwidth and latency. Broadband technologies like Wi-Fi could meet such requirements. Nevertheless, these technologies come with limitations: high energy consumption and limited coverage range. To address these two shortcomings, and building on the recent IEEE 802.11ba amendment, we propose a Wi-Fi-based mesh architecture in which devices are outfitted with a supplementary Wake-up Radio (WuR) interface. According to our analytical and simulation studies, this design maintains latency figures comparable to conventional single-interface networks while significantly reducing energy consumption (by up to almost two orders of magnitude). Additionally, we verify via real-device measurements that battery lifetime can be increased by as much as 500% with our approach.
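The source of the savings can be sketched with a back-of-the-envelope energy model: idle listening on the main Wi-Fi radio dominates the budget, and a WuR lets the node sleep while a micro-watt receiver listens for wake-up signals. All power figures below are assumptions for illustration, not measurements from the study.

```python
# All power figures are assumed, ballpark values (not from the paper).
P_WIFI_ACTIVE = 1.2    # W, main radio transmitting/receiving
P_WIFI_IDLE = 0.8      # W, main radio idle listening
P_WUR = 0.0001         # W, wake-up receiver always on (~100 uW)
P_SLEEP = 0.00002      # W, MCU deep sleep

def avg_power(duty_active: float, use_wur: bool) -> float:
    """Average power when a fraction `duty_active` of time is spent
    actively tx/rx-ing; the rest is idle listening (no WuR) or deep
    sleep with only the wake-up receiver on (WuR)."""
    idle = (P_WUR + P_SLEEP) if use_wur else P_WIFI_IDLE
    return duty_active * P_WIFI_ACTIVE + (1 - duty_active) * idle

# At 1% active duty cycle, compare the two designs.
ratio = avg_power(0.01, use_wur=False) / avg_power(0.01, use_wur=True)
```

With these assumed numbers the savings land in the tens-of-times range at low duty cycles, consistent with the order-of-magnitude reductions the abstract reports; the exact factor depends on the duty cycle and hardware.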
Training a deep learning model generally requires a huge amount of memory and processing power. Once trained, the learned model can make predictions very fast with very little resource consumption. The learned weights can be fitted into a microcontroller to build affordable embedded intelligence systems, an approach also known as TinyML. Although a few attempts have been made, the limits of state-of-the-art training of deep learning models within a microcontroller can be pushed further. Deep learning models are generally trained with gradient optimizers, which achieve high prediction accuracy but require a very large amount of resources. On the other hand, nature-inspired meta-heuristic optimizers can be used to build a fast approximation of the model's optimal solution with low resources. After rigorous testing, we have found that the Grey Wolf Optimizer can be modified for enhanced use of main memory, paging, and swap space among the wolves. This modification saved up to 71% of memory requirements compared to gradient optimizers. We have used this modification to train a TinyML model within a microcontroller with 256 KB of RAM. The performance of the proposed framework has been meticulously benchmarked on 13 open-source datasets.
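For context, a minimal sketch of the standard Grey Wolf Optimizer follows: the pack moves toward the three current best wolves (alpha, beta, delta), with an exploration coefficient that decays over iterations. This is the baseline algorithm, not the memory-optimized variant the abstract proposes; all parameter defaults are illustrative.

```python
import random

def gwo(f, dim, n_wolves=10, iters=100, lo=-5.0, hi=5.0, seed=0):
    """Standard Grey Wolf Optimizer minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    best = min(wolves, key=f)[:]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]  # alpha, beta, delta (copies)
        a = 2.0 * (1 - t / iters)             # decays linearly from 2 to 0
        for i in range(n_wolves):
            for d in range(dim):
                moves = []
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    D = abs(C * leader[d] - wolves[i][d])  # encircling step
                    moves.append(leader[d] - A * D)
                # New position: average of the pulls toward the three leaders,
                # clamped to the search bounds.
                wolves[i][d] = max(lo, min(hi, sum(moves) / 3))
        cand = min(wolves, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best
```

Note the low memory footprint: the optimizer stores only the wolf positions themselves, with no gradients or per-parameter optimizer state, which is the property the proposed modification exploits further by managing memory, paging, and swap across the wolves.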