The metaverse aims to provide immersive virtual worlds connected with the physical world. To enable real-time interpersonal communication between users across the globe, the metaverse places high demands on network performance, including low latency and high bandwidth. This paper proposes a novel Media Convergence Metaverse Network (MCMN) framework to address these challenges. Specifically, the META controller serves as MCMN's logically centralized control plane, responsible for holistic orchestration across edge sites and end-to-end path computation between metaverse users. We develop a model-free deep reinforcement learning-based metaverse traffic optimization algorithm that learns to route flows while satisfying Quality of Service (QoS) bounds. The network slicing engine leverages artificial intelligence and machine learning to create isolated, customized virtual networks tailored on demand to metaverse traffic dynamics. It employs unsupervised and reinforcement learning techniques on network telemetry from the META controller to understand application traffic patterns and train cognitive slicer agents to make QoS-aware decisions accordingly. Optimized delivery of diverse concurrent media types requires routing intelligence that meets their distinct requirements while mitigating contention over shared infrastructure. Media-aware routing enhances traditional shortest-path approaches by combining topological metrics with workflow sensitivities. We realize an edge-assisted rendering fabric that offloads complex processing from bandwidth-constrained endpoints while retaining visual realism. Extensive simulations demonstrate MCMN's superior performance compared to conventional networking paradigms. MCMN shows great promise for enabling seamless interconnectivity and ultra-high-fidelity communications that unlock the true potential of the metaverse.
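To make the media-aware routing idea concrete, here is a minimal sketch of shortest-path computation over blended link costs. The per-media sensitivity profiles, metric names, and weights are illustrative assumptions, not values from the paper; the point is only how topological metrics and workflow sensitivities can be combined into one edge weight.

```python
import heapq

# Hypothetical per-media sensitivity profiles (assumed, not from the paper):
# each media class weights link latency, loss, and jitter differently.
MEDIA_PROFILES = {
    "haptic": {"latency": 0.7, "loss": 0.1, "jitter": 0.2},
    "volumetric_video": {"latency": 0.3, "loss": 0.5, "jitter": 0.2},
    "spatial_audio": {"latency": 0.4, "loss": 0.2, "jitter": 0.4},
}

def media_aware_cost(link_metrics, profile):
    """Blend topological link metrics with a media workflow's sensitivities."""
    return sum(profile[k] * link_metrics[k] for k in profile)

def media_aware_route(graph, src, dst, media):
    """Dijkstra over media-weighted costs; graph[u] = {v: metric_dict}."""
    profile = MEDIA_PROFILES[media]
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, metrics in graph[u].items():
            nd = d + media_aware_cost(metrics, profile)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path (assumes dst is reachable from src).
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The same topology thus yields different paths per media class: a haptic flow gravitates toward low-latency links, while volumetric video favors low-loss ones.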
We propose a novel framework for achieving precision landing in drone services. The framework consists of two decoupled modules, each addressing a distinct source of landing inaccuracy. The first module targets intrinsic errors and introduces new error models, including a spherical error model that accounts for the drone's orientation. Additionally, we propose a live position correction algorithm that employs these error models to correct intrinsic errors in real time. The second module focuses on external wind forces and presents an aerodynamics model with wind generation to simulate the drone's physical environment. We utilize reinforcement learning to train the drone in simulation to land precisely under dynamic wind conditions. Experimental results, conducted in simulation and validated in the physical world, demonstrate that our framework significantly improves landing accuracy while maintaining a low onboard computational cost.
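The abstract does not give the spherical error model's equations, so the following is only a speculative sketch of an orientation-dependent correction step: it assumes the intrinsic bias is a fixed body-frame offset of known magnitude that must be rotated into the world frame before subtraction. The frame convention and the body-frame offset direction are assumptions.

```python
import numpy as np

def spherical_error_correction(est_pos, roll, pitch, yaw, radius):
    """
    Illustrative live position correction (assumed form, not the paper's):
    treat the intrinsic error as a point on a sphere of known radius around
    the true position, displaced along a body axis set by the drone's
    orientation; rotate that offset into the world frame and subtract it.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Standard ZYX (yaw-pitch-roll) rotation matrix, body frame -> world frame.
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    offset_body = np.array([0.0, 0.0, radius])  # hypothetical body-frame bias
    return np.asarray(est_pos) - R @ offset_body
```

A correction of this shape is cheap enough to run in the control loop, which is consistent with the paper's low onboard computational cost claim.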
Security Operations Centres (SOCs) play a pivotal role in defending organisations against evolving cyber threats. They function as central hubs for detecting, analysing, and responding promptly to cyber incidents, with the primary objective of ensuring the confidentiality, integrity, and availability of digital assets. However, they struggle against the growing problem of alert fatigue, where the sheer volume of alerts overwhelms SOC analysts and raises the risk of overlooking critical threats. In recent times, there has been a growing call for human-AI teaming, wherein humans and AI collaborate, leveraging their complementary strengths and compensating for each other's weaknesses. The rapid advances in AI and the growing integration of AI-enabled tools and technologies within SOCs make a compelling argument for implementing human-AI teaming in the SOC environment. Therefore, in this position paper, we present our vision for human-AI teaming to address the problem of alert fatigue in SOCs. We propose the $\mathcal{A}^2\mathcal{C}$ Framework, which enables flexible and dynamic decision-making by allowing seamless transitions between automated, augmented, and collaborative modes of operation. Our framework allows AI-powered automation for routine alerts, AI-driven augmentation for expedited expert decision-making, and collaborative exploration for tackling complex, novel threats. By implementing and operationalising $\mathcal{A}^2\mathcal{C}$, SOCs can significantly reduce alert fatigue while empowering analysts to respond to security incidents efficiently and effectively.
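A dispatcher over the three $\mathcal{A}^2\mathcal{C}$ modes might look like the sketch below. The alert fields (`confidence`, `novelty`) and the thresholds are illustrative assumptions; the paper's actual transition logic between modes may differ.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()      # AI handles routine alerts end to end
    AUGMENTED = auto()      # AI pre-analyses, analyst makes the final call
    COLLABORATIVE = auto()  # analyst and AI jointly explore a novel threat

def select_mode(alert, conf_threshold=0.95, novelty_threshold=0.7):
    """
    Illustrative A^2C-style mode selection (fields and thresholds are
    assumptions, not taken from the paper). 'confidence' is the AI model's
    classification confidence for the alert; 'novelty' scores how unlike
    previously seen alerts this one is.
    """
    if alert["novelty"] >= novelty_threshold:
        return Mode.COLLABORATIVE   # complex/novel threat: explore together
    if alert["confidence"] >= conf_threshold:
        return Mode.AUTOMATED       # routine alert: automate the response
    return Mode.AUGMENTED           # ambiguous: AI assists, analyst decides
```

The key design property is that the mode is chosen per alert, so the SOC can move fluidly between automation and collaboration rather than committing to one fixed operating point.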
With the rapid advancement of the Internet of Things (IoT) and 5G networks in smart cities, the inevitable generation of massive amounts of data, commonly known as big data, has introduced increased latency within the traditional cloud computing paradigm. In response to this challenge, Mobile Edge Computing (MEC) has emerged as a viable solution, offloading a portion of mobile device workloads to nearby edge servers equipped with ample computational resources. Despite significant research on MEC systems, optimizing the placement of edge servers in smart cities to enhance network performance has received little attention. In this paper, we propose RESP, a novel recursive clustering technique for edge server placement in MEC environments. RESP operates on the median of each cluster, determined by the number of Base Transceiver Stations (BTSs), strategically placing edge servers to balance workload and minimize network traffic between them. Our clustering approach substantially improves load balancing compared to existing methods and demonstrates superior performance in handling traffic dynamics. In an experimental evaluation on real-world data from Shanghai Telecom's base station dataset, our approach outperforms several representative techniques in terms of workload balancing and network traffic optimization. By addressing the edge server placement (ESP) problem with an advanced recursive clustering technique, this work makes a substantial contribution to optimizing mobile edge computing networks in smart cities. The proposed algorithm outperforms alternative methodologies, demonstrating a 10% average improvement in network traffic optimization and a 53% better result in terms of computational load balance.
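One plausible reading of the recursive median-based clustering is a k-d-tree-style partition of BTS coordinates, sketched below. The split rule, stopping criterion (`max_size`), and server placement at the cluster median are our assumptions about the general recipe, not RESP's exact algorithm.

```python
import statistics

def recursive_median_clusters(btss, max_size, axis=0):
    """
    Illustrative recursive median split in the spirit of RESP (details are
    assumptions): split the BTS coordinates at the median along alternating
    axes until every cluster contains at most max_size BTSs.
    btss is a list of (x, y) BTS locations.
    """
    if len(btss) <= max_size:
        return [btss]
    btss = sorted(btss, key=lambda p: p[axis])
    mid = len(btss) // 2
    nxt = 1 - axis  # alternate x / y splits, k-d-tree style
    return (recursive_median_clusters(btss[:mid], max_size, nxt)
            + recursive_median_clusters(btss[mid:], max_size, nxt))

def place_edge_servers(btss, max_size):
    """Place one edge server at the coordinate-wise median of each cluster."""
    servers = []
    for cluster in recursive_median_clusters(btss, max_size):
        servers.append((statistics.median(p[0] for p in cluster),
                        statistics.median(p[1] for p in cluster)))
    return servers
```

Because each split halves the cluster at the median, the resulting clusters contain near-equal numbers of BTSs, which is what drives the workload-balancing behavior the abstract reports.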
The Internet of Things (IoT) refers to a complex network of interconnected devices that transmit their data via the Internet. Due to their open environment, limited computational power, and absence of built-in security, IoT environments are susceptible to various cyberattacks. Distributed Denial of Service (DDoS) attacks are among the most destructive types of threats. The multi-vector DDoS attack is a contemporary and formidable form of DDoS in which the attacker employs a collection of compromised IoT devices as zombies to launch numerous DDoS attacks against a target server. This paper proposes OTI-IoT, a Blockchain-based Operational Threat Intelligence framework to counter multi-vector DDoS attacks in IoT networks. OTI-IoT follows a "Prevent-then-Detect" methodology and is deployed in two distinct phases. In Phase 1, the consortium Blockchain network validators employ the IPS module, composed of a smart contract for attack prevention and access control together with Proof of Voting consensus, to thwart attacks. In Phase 2, validators are outfitted with deep learning-based IDS instances to detect multi-vector DDoS attacks. The IDS module's alert generation and propagation smart contract generates alert messages upon identifying malicious IoT sources, and a feedback loop from the IDS module to the IPS module then blocks incoming traffic from those sources. The framework's operational threat intelligence capabilities are realized by combining the outputs of the IDS and IPS modules and storing them on the consortium Blockchain. Each validator maintains a shared ledger of threat-source information to ensure robust security, transparency, and integrity. OTI-IoT is deployed on its own Ethereum Blockchain. The empirical findings indicate that our framework is well suited to real-time applications owing to its lower attack detection time, reduced block validation time, and higher attack prevention rate.
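The essence of the "Prevent-then-Detect" feedback loop can be sketched in a few lines. The interfaces below are assumptions for illustration: in the real framework, prevention and alert propagation are realized as smart contracts on the consortium Blockchain, and the blocklist corresponds to the validators' shared ledger, not an in-process Python set.

```python
class OTIFeedbackLoop:
    """
    Minimal sketch of the Prevent-then-Detect loop (interfaces assumed).
    ids_model stands in for a deep-learning IDS instance exposing
    predict(features) -> label; blocklist stands in for the shared ledger.
    """
    def __init__(self, ids_model):
        self.ids_model = ids_model
        self.blocklist = set()

    def handle_packet(self, packet):
        # Phase 1 (IPS): drop traffic from sources already flagged malicious.
        if packet["src"] in self.blocklist:
            return "dropped"
        # Phase 2 (IDS): classify the flow; on detection, propagate an alert
        # that feeds back into the prevention stage.
        if self.ids_model.predict(packet["features"]) == "ddos":
            self.blocklist.add(packet["src"])  # alert propagation -> IPS
            return "detected_and_blocked"
        return "forwarded"
```

The loop is what turns one-time detections into persistent prevention: once a zombie source is identified, every subsequent vector it launches is stopped at Phase 1 without re-invoking the IDS.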
To gain a comprehensive understanding of a patient's health, advanced analytics must be applied to the data collected by electronic health record (EHR) systems. However, managing and curating this data requires carefully designed workflows. While digitalization and standardization enable continuous health monitoring, missing values and technical issues can compromise the consistency and timeliness of the data. In this paper, we propose a workflow for developing prognostic models that leverages the SMART BEAR infrastructure and the capabilities of the Big Data Analytics (BDA) engine to homogenize and harmonize data points. Our workflow improves data quality by evaluating different imputation algorithms and selecting the one that best preserves the distribution and correlation structure of the raw data. We apply this workflow to a subset of the data stored in the SMART BEAR repository and examine its impact on the prediction of emerging health states such as cardiovascular disease and mild depression. We also discuss the possibility of model validation by clinicians in the SMART BEAR project, the transmission of subsequent actions in the decision support system, and the estimation of the required number of data points.
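The imputer-selection step can be illustrated with a small scoring loop: impute, then measure how far the imputed data drifts from the raw data's marginal distributions and feature correlations, and keep the imputer with the least drift. The specific criterion (KS statistic plus Frobenius norm of the correlation difference) and candidate imputers are illustrative stand-ins; the paper's exact evaluation may differ. Assumes a numeric DataFrame with NaNs for missing values.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.impute import SimpleImputer, KNNImputer

def score_imputer(raw: pd.DataFrame, imputer) -> float:
    """
    Score how well an imputer preserves the raw data: mean per-column
    Kolmogorov-Smirnov statistic (distribution drift) plus the Frobenius
    norm of the correlation-matrix difference. Lower is better.
    """
    imputed = pd.DataFrame(imputer.fit_transform(raw), columns=raw.columns)
    ks = np.mean([ks_2samp(raw[c].dropna(), imputed[c]).statistic
                  for c in raw.columns])
    corr_drift = np.linalg.norm(raw.corr().fillna(0) - imputed.corr().fillna(0))
    return ks + corr_drift

def select_imputer(raw: pd.DataFrame):
    """Pick the candidate imputer that least distorts the raw data."""
    candidates = [SimpleImputer(strategy="mean"),
                  SimpleImputer(strategy="median"),
                  KNNImputer(n_neighbors=5)]
    return min(candidates, key=lambda imp: score_imputer(raw, imp))
```

Selecting for preserved distributions and correlations, rather than for downstream accuracy alone, keeps the imputed dataset reusable across multiple prognostic models.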
Vertical federated learning (VFL) revolutionizes privacy-preserving collaboration among small businesses that hold distinct but complementary feature sets. However, as the scope of VFL expands, the constant entering and leaving of participants, as well as the subsequent exercise of the “right to be forgotten”, pose a great challenge in practice. How to efficiently erase a participant's contribution from the shared model remains largely unexplored in the context of vertical federated learning. In this paper, we introduce a vertical federated unlearning framework that integrates model checkpointing techniques with a hybrid first-order optimization technique. The core concept is to reduce backpropagation time and improve convergence and generalization by combining the advantages of existing optimizers. We provide an in-depth theoretical analysis, including time complexity, to illustrate the effectiveness of the proposed design. We conduct extensive experiments on six public datasets and demonstrate that our method achieves up to 6.3× speed-up compared to the baseline, with negligible influence on the original learning task.
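The general shape of checkpoint-based unlearning can be sketched as a rollback-and-replay loop. This is our assumption of the generic recipe, not the paper's exact procedure: roll the model back to the last checkpoint untouched by the departing party, then retrain the remaining parties' rounds with a fast first-order optimizer. The checkpoint/log record layout and `retrain_step` are hypothetical; `load_state_dict` assumes a PyTorch-style model.

```python
import copy

def unlearn_party(model, checkpoints, updates_log, removed_party, retrain_step):
    """
    Sketch of checkpoint-based vertical federated unlearning (structure
    assumed). checkpoints[i] = {"contributors": set, "state": state_dict};
    updates_log[i] = {"batches": [{"party": str, ...}, ...]} for round i.
    """
    # 1. Roll back to the latest checkpoint with no contribution from the
    #    removed party, so its influence is provably absent from the start.
    rollback_idx = max(i for i, ck in enumerate(checkpoints)
                       if removed_party not in ck["contributors"])
    model.load_state_dict(copy.deepcopy(checkpoints[rollback_idx]["state"]))

    # 2. Replay the subsequent rounds without the removed party's features,
    #    re-converging via the (hybrid first-order) retraining step.
    for round_record in updates_log[rollback_idx:]:
        for batch in round_record["batches"]:
            if batch["party"] != removed_party:
                retrain_step(model, batch)
    return model
```

The reported speed-up then comes from making step 2 cheap: the hybrid optimizer shortens the replayed retraining relative to training from scratch.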