In the Internet of Everything (IoE) era, the proliferation of Internet of Things (IoT) devices is accelerating rapidly. In particular, smaller devices are increasingly constrained by hardware limitations that restrict their computational capacity, communication bandwidth, and battery life. Our research explores a multi-device, multi-access edge computing (MEC) environment within small cells to address the challenges these hardware limitations pose. We employ wireless power transfer (WPT) to ensure the IoT devices have sufficient energy for task processing. We propose a system architecture in which an intelligent reflecting surface (IRS) is carried by an unmanned aerial vehicle (UAV) to improve communication conditions. For sustainable energy harvesting (EH), we integrate a normal distribution into the objective function. We use the softmax deep double deterministic policy gradients (SD3) algorithm, a deep reinforcement learning (DRL) method, to optimize the computational and communication capabilities of the IoT devices. Simulation experiments demonstrate that our SD3-based EH edge computing (EHEC-SD3) algorithm surpasses existing DRL algorithms in the explored environments, achieving more than 90% in overall optimization and EH performance.
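To make the role of the softmax operator concrete, below is a minimal NumPy sketch of the softmax value estimate that distinguishes SD3 from standard double deterministic policy gradient methods. The sampling scheme, temperature, and the clipped-double-critic combination are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sd3_softmax_value(q_values, beta=1.0):
    """Softmax-weighted value over Q-estimates of sampled actions.

    beta is the inverse temperature: as beta -> infinity this recovers
    the hard max used by plain deterministic policy gradient targets.
    """
    w = np.exp(beta * (q_values - q_values.max()))  # stabilized softmax
    w /= w.sum()
    return float(np.sum(w * q_values))

# Hypothetical twin-critic target: Q-estimates for actions sampled near
# the target policy's action, combined clipped-double-Q style.
q1 = np.array([1.2, 0.8, 1.5, 1.1])
q2 = np.array([1.0, 0.9, 1.4, 1.3])
reward, gamma = 0.5, 0.99
td_target = reward + gamma * min(sd3_softmax_value(q1), sd3_softmax_value(q2))
```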
In recent years, digital data in the Industrial Internet of Things (IIoT) has attracted growing attention, accompanied by increasing copyright-violation challenges in the transmission and storage of sensitive data. To address this issue, we propose a generative adversarial network (GAN)-based image watermarking scheme for privacy-preserving split model training. In the first stage, we train the model in a split fashion, without clients sharing raw data, to reduce privacy leakage. In the second stage, we design a GAN-based watermark embedding and extraction network that imperceptibly embeds sensitive information while enhancing robustness. Moreover, the sensitive mark is jointly encrypted and compressed before being sent to the server, protecting user confidentiality while reducing bandwidth and storage demands. We tested the proposed scheme on multiple standard datasets, including DIV2K, CelebA, and Flickr. The results on the DIV2K dataset show that the proposed method surpasses several state-of-the-art methods, with average PSNR and NC increasing by 47.75% and 26.72%, respectively. Our joint encryption and compression method also achieves superior performance compared with other methods, with average NPCR and UACI increasing by 18.25% and 16.87%, respectively. To the best of our knowledge, we are the first to explore GAN-based watermarking for digital images in a split learning setting.
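As a rough illustration of the embedding side, the following PyTorch sketch fuses a binary watermark map with a cover image and predicts a small residual, which keeps the embedding imperceptible. The architecture, channel counts, and residual scale are assumptions made for illustration; the paper's GAN embedder and its adversarial training loop are not reproduced here.

```python
import torch
import torch.nn as nn

class WatermarkEmbedder(nn.Module):
    """Toy embedder: concatenates cover image and watermark map, then
    predicts an imperceptible residual added to the cover."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cover, mark):
        # cover: [B, 3, H, W] in [-1, 1]; mark: [B, 1, H, W] binary map
        residual = self.net(torch.cat([cover, mark], dim=1))
        return cover + 0.05 * residual  # small residual preserves PSNR

cover = torch.rand(1, 3, 64, 64) * 2 - 1
mark = (torch.rand(1, 1, 64, 64) > 0.5).float()
watermarked = WatermarkEmbedder()(cover, mark)
```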
With the development of urban rail transit (URT), many latency-sensitive and computationally intensive tasks arise. Edge computing can provide low-latency computing services in URT systems. However, owing to limited computing resources, edge servers operating independently cannot always process all incoming tasks in a timely manner; they must collaborate frequently through peer-to-peer offloading. It is challenging for a server to select the appropriate computing power resources and corresponding network connections to meet its performance and cost requirements. More importantly, edge servers are deployed and managed by different computing departments, putting the task offloading process at risk. We propose a blockchain-based computing power sharing system to achieve secure and efficient computing power sharing in URT systems. The blockchain provides auditing and checking functions that guarantee the security of computing power resource sharing. We further propose a method to optimize the computing power sharing strategy and node selection strategy in the sharing workflow. The numerical findings reveal that the proposed scheme delivers significant improvements in both departmental utility and business processing capability.
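As a schematic of the node selection step, the sketch below greedily picks the peer server that maximizes a simple cost-and-latency utility. The utility form, field names, and weights are hypothetical placeholders for the optimization method in the paper, and the blockchain auditing layer is omitted.

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    free_ghz: float     # spare computing power advertised for sharing
    price: float        # cost per GHz charged by the owning department
    latency_ms: float   # network latency to the requesting server

def select_node(nodes, demand_ghz, latency_weight=0.01):
    """Pick the feasible peer with the best (least negative) utility."""
    feasible = [n for n in nodes if n.free_ghz >= demand_ghz]
    if not feasible:
        return None  # no peer can host the task; process locally or queue
    utility = lambda n: -(n.price * demand_ghz) - latency_weight * n.latency_ms
    return max(feasible, key=utility)

peers = [EdgeNode("station-A", 4.0, 0.8, 12.0),
         EdgeNode("station-B", 2.5, 0.5, 30.0)]
best = select_node(peers, demand_ghz=2.0)
```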
CG-Kit is a new Code Generation tool-Kit that we have developed as part of a solution for portability and maintainability in multiphysics computing applications. The development of CG-Kit is rooted in the urgent need created by the shifting landscape of high-performance computing platforms and by the algorithmic complexity of a particular large-scale multiphysics application: Flash-X. To use the computing resources on a heterogeneous node efficiently, an application must have a map of computation to resources and a mechanism to move data and computation to the resources according to that map. Most existing performance portability solutions focus on abstracting the expression of computations so that a unified source code can be specialized to run on different resources. However, such an approach is insufficient for a code like Flash-X, which has a multitude of code components that can be assembled in various permutations and combinations to form different application instances. Similar challenges apply to any composable code, where a single prescribed way of apportioning work among devices may not be optimal. Additionally, use cases arise where the optimal control flow of the computation differs across devices while the underlying numerics remain identical. This combination leads to unique challenges, including handling an existing large code base in Fortran and/or C/C++, subdividing code into a great variety of units supporting a wide range of physics and numerical methods, using different parallelization techniques for distributed memory, shared memory, and accelerator devices, and accommodating heterogeneous platforms that require coexisting variants of parallel algorithms. All of these challenges demand that scientific software developers apply existing knowledge about domain applications, algorithms, and computing platforms to determine custom abstractions and granularity for code generation. There is a critical lack of tools to tackle these problems. CG-Kit is designed to fill this gap by allowing users to express their desired control flow and computation-to-resource map in the form of a pseudocode-like recipe. It consists of standalone tools that can be combined into highly specific and, we argue, highly effective portability and maintainability toolchains. Here we present the design of our new tools: parametrized source trees, control flow graphs, and recipes. The tools are implemented in Python and are agnostic to the programming language of the source code targeted for code generation. We demonstrate the capabilities of the toolkit with two examples: first, multithreaded variants of the basic AXPY operation; and second, variants of parallel algorithms within Spark, a hydrodynamics solver from Flash-X that operates on block-structured adaptive meshes.
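To give a flavor of what a pseudocode-like recipe might look like, the sketch below composes a tiny computation-to-resource map and "generates" a plan from it. This is a hypothetical illustration only; CG-Kit's actual recipe syntax, class names, and generation pipeline (parametrized source trees specialized per device) are not shown here.

```python
# Hypothetical recipe sketch, NOT CG-Kit's actual API.
class Recipe:
    def __init__(self):
        self.steps = []  # ordered (kernel, device) pairs forming a control flow

    def add(self, kernel, device):
        self.steps.append((kernel, device))
        return self  # allow chaining, recipe-style

    def generate(self):
        # A real generator would expand parametrized source trees here;
        # this sketch just emits a readable launch plan.
        return "\n".join(f"launch {k} on {d}" for k, d in self.steps)

recipe = (Recipe()
          .add("reconstruct_fluxes", "gpu")
          .add("update_solution", "cpu"))
print(recipe.generate())
```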
Vehicular communication systems provide two types of communication: Vehicle-to-Infrastructure (V2I) and Vehicle-to-Vehicle (V2V). In both cases, there is zero trust between the communicating entities, which may allow unauthorized vehicles to join the network. Hence, a strong authentication protocol is required to ensure proper access control and communication security. In traditional protocols, such tasks are typically accomplished via a central Trusted Authority (TA). However, communication with the TA may increase the overall authentication delay. Such delay may be incompatible with future-generation vehicular communication systems (e.g., 5G onward), where dense deployment of small cells is required to ensure higher system capacity and seamless mobility. Further, the TA may suffer denial-of-service when the number of access requests becomes excessively large, because each request must be forwarded to the TA for authentication and access control. In this article, we put forward ZeroVCS, an efficient authentication protocol without a trusted authority for zero-trust vehicular communication systems. It does not involve a TA for authentication and access control, thus improving authentication delay, reducing the chance of denial-of-service, and ensuring compatibility with future-generation vehicular communication systems. ZeroVCS also provides communication security under various passive and active attacks. Finally, a performance-based comparison demonstrates the efficiency of ZeroVCS.
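For intuition, the snippet below sketches TA-free mutual authentication via a challenge-response over a pre-provisioned shared secret. This is not the ZeroVCS protocol itself (whose message flow and key material are specified in the article), only a minimal illustration of authenticating without contacting a central authority.

```python
import hashlib
import hmac
import os

GROUP_KEY = os.urandom(32)  # toy model: secret provisioned to vehicles offline

def respond(challenge: bytes, key: bytes = GROUP_KEY) -> bytes:
    """Prover's answer to a fresh challenge, keyed by the shared secret."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = GROUP_KEY) -> bool:
    """Verifier checks the response locally; no round trip to a TA."""
    return hmac.compare_digest(respond(challenge, key), response)

nonce = os.urandom(16)               # verifier's fresh challenge
assert verify(nonce, respond(nonce))
```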
With the widespread application of Internet of Things (IoT) technology, traffic optimization has shifted from a broad-brush to a more refined approach. An increasing amount of IoT data is being used in trajectory mining and inference, offering more precise characteristic information for optimizing public transportation. Services that optimize public transit based on inferred travel characteristics can enhance the appeal of public transport, increase its likelihood as a travel choice, alleviate traffic congestion, and reduce carbon emissions. However, the inherent complexity of disorganized and unstructured public transportation data poses significant challenges to extracting travel features. This study explores the enhancement of bus travel by integrating technologies such as positioning systems, IoT, and AI to infer features in public transportation data. It introduces MK-LDA (MeanShift K-means Latent Dirichlet Allocation), a novel topic modeling technique for deducing characteristics of public transit travel from limited travel trajectory data. The model employs a segmented inference methodology: it first leverages the MeanShift clustering algorithm to create POI seeds, and then applies the P-K-means algorithm to discern patterns in user travel behavior and extract travel modalities. Additionally, a P-LDA (POI-Latent Dirichlet Allocation) inference algorithm is proposed to examine the interplay between travel characteristics and behaviors, specifically targeting attributes significantly correlated with public transit usage, including age, occupation, gender, activity levels, cost, safety, and personality traits. Empirical validation demonstrates the efficacy of this topic-modeling-based inference technique in identifying and predicting travel characteristics and patterns, offering enhanced interpretability and outperforming conventional baselines.
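The first stage of the segmented inference pipeline can be illustrated with scikit-learn: MeanShift proposes POI seed centers, which then initialize K-means over trajectory stop points. The data below are synthetic, and this sketch stands in for, rather than reproduces, the paper's P-K-means step.

```python
import numpy as np
from sklearn.cluster import KMeans, MeanShift

# Synthetic stop points scattered around three hypothetical POIs.
rng = np.random.default_rng(0)
stops = np.vstack([rng.normal(c, 0.05, size=(50, 2))
                   for c in [(0, 0), (1, 1), (0, 1)]])

# Stage 1: MeanShift discovers POI seeds without a preset cluster count.
seeds = MeanShift(bandwidth=0.3).fit(stops).cluster_centers_

# Stage 2: K-means refines assignments, initialized from the POI seeds.
labels = KMeans(n_clusters=len(seeds), init=seeds, n_init=1).fit_predict(stops)
```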
The increased performance requirements of applications running on safety-critical systems have led to the use of complex platforms with several CPUs, GPUs, and AI accelerators. However, higher platform and system complexity challenges performance verification and validation, since timing interference across tasks occurs in unobvious ways, defeating attempts to optimize application consolidation in an informed manner during design phases and to validate, during test phases, that mutual interference across tasks stays within bounds.
In that respect, the SafeSU has been proposed to extend inter-task interference monitoring capabilities in simple systems. However, modern mixed-criticality systems are complex, with multilayered interconnects, shared caches, and hardware accelerators. To close this gap, this paper proposes a non-intrusive add-on approach for monitoring interference across tasks in multilayer heterogeneous systems, implemented by leveraging existing security frameworks and the SafeSU infrastructure.
The feasibility of the proposed approach has been validated on an RTL RISC-V-based multicore SoC with support for AI hardware acceleration. Our results show that the approach can safely track contention and properly break down contention cycles across the different sources of interference, thereby guiding optimization and validation processes.
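The kind of per-source breakdown the approach produces can be pictured with the toy snippet below: given stall-cycle counters attributed to each interference source, it reports each source's share of total contention. The counter values and source names are hypothetical, not the actual SafeSU register interface.

```python
# Hypothetical per-source stall-cycle counters for one monitored task.
counters = {
    "shared_cache": 12_400,
    "interconnect": 8_150,
    "ai_accelerator_dma": 3_020,
}

# Break down total contention by source, largest contributor first.
total = sum(counters.values())
for source, cycles in sorted(counters.items(), key=lambda kv: -kv[1]):
    print(f"{source:>20}: {cycles:8d} cycles ({100 * cycles / total:5.1f}%)")
```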
Detecting vulnerabilities in source code using graph neural networks (GNNs) has gained significant attention in recent years. However, the detection performance of these approaches depends heavily on the graph structure, and constructing meaningful graphs is expensive. Moreover, they often operate at a coarse level of granularity (such as the function level), which limits their applicability to scripting languages like Python and their effectiveness in identifying vulnerabilities. To address these limitations, we propose DetectVul, a new approach that accurately detects vulnerable patterns in Python source code at the statement level. DetectVul applies self-attention to directly learn patterns and interactions between statements in a raw Python function, eliminating the complicated graph extraction process without sacrificing model performance. In addition, information about each statement type is leveraged to enhance the model's detection accuracy. In our experiments, we used two datasets, CVEFixes and Vudenc, with 211,317 Python statements in 21,571 functions from real-world projects on GitHub, covering seven vulnerability types. Our experiments show that DetectVul outperforms GNN-based models using control flow graphs, achieving the best F1 score of 74.47%, which is 25.45% and 18.05% higher than the best GCN and GAT models, respectively.
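A minimal PyTorch sketch of the core idea, self-attention over per-statement embeddings with a label per statement rather than per function, is given below. Vocabulary size, layer counts, and the statement-type embedding dimensions are illustrative assumptions, not DetectVul's actual configuration.

```python
import torch
import torch.nn as nn

class StatementVulnClassifier(nn.Module):
    """Self-attention over statement embeddings; one logit pair per statement."""
    def __init__(self, vocab=5000, dim=128, heads=4, num_types=10):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)        # statement-content embedding
        self.typ = nn.Embedding(num_types, dim)    # statement-type embedding
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 2)              # vulnerable vs. benign

    def forward(self, stmt_ids, type_ids):
        # stmt_ids, type_ids: [batch, num_statements]
        x = self.tok(stmt_ids) + self.typ(type_ids)
        return self.head(self.encoder(x))          # logits per statement

model = StatementVulnClassifier()
logits = model(torch.randint(0, 5000, (2, 16)), torch.randint(0, 10, (2, 16)))
```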