Cloud users have specific computing needs for their applications, while cloud providers offer a variety of computing products and services on the Internet. These two parties make deals through service level agreements (SLAs), which define, for instance, prices and levels of quality of service (QoS). From the cloud user's point of view, building a robust set of SLAs becomes challenging when multiple cloud providers are present in the market. Allocating cloud resources to run complex applications with reliable, secure, and acceptable response times is not an easy task, and this paper aims to tackle this problem. This work describes a resource allocation service that optimizes the placement of the user's requested cloud resources (virtual machines, VMs) across multiple Infrastructure-as-a-Service (IaaS) cloud providers. The Resource-Allocation-as-a-Service (RAaaS) proposed in this paper works as a standalone service between cloud users and cloud providers and considers three requirements: reliability, processing, and mutual trust. The proposed resource allocation is carried out using the three most common VM billing models: on-demand, reserved, and spot, where the spot model is employed to furnish low-cost resources that improve application reliability. The contributions of this paper are threefold: (i) a three-dimensional SLA encompassing reliability, processing, and trust; (ii) an integer linear program (ILP) that schedules cloud VMs to applications under the three-dimensional SLA model; and (iii) a heuristic algorithm to mitigate possible QoS violations. Experimental results show that the proposed RAaaS is capable of optimizing resource allocation across the multiple SLA criteria while mitigating the extra costs introduced by mutual trust between customers through redundant spot instance allocation.
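To make the ILP contribution concrete, the following is a minimal sketch of how such a VM-to-offer assignment could be posed with PuLP. The offer data, SLA thresholds, and variable names are hypothetical illustrations, not the paper's exact formulation.

```python
# Hypothetical, simplified sketch of an ILP assigning requested VMs to provider
# billing offers; NOT the paper's exact model. Requires: pip install pulp
import pulp

vms = ["vm1", "vm2", "vm3"]                       # VMs requested by the user
offers = {                                        # (cost, reliability, processing, trust) - toy data
    "p1_ondemand": (0.10, 0.999, 8.0, 0.9),
    "p1_spot":     (0.03, 0.90,  8.0, 0.9),
    "p2_reserved": (0.07, 0.995, 6.0, 0.7),
}

prob = pulp.LpProblem("raaas_vm_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vms, offers), cat="Binary")

# Objective: minimize the total cost of the chosen offers.
prob += pulp.lpSum(offers[o][0] * x[v][o] for v in vms for o in offers)

for v in vms:
    # Each VM is placed on exactly one offer.
    prob += pulp.lpSum(x[v][o] for o in offers) == 1
    # Illustrative per-dimension SLA thresholds (reliability, processing, trust).
    prob += pulp.lpSum(offers[o][1] * x[v][o] for o in offers) >= 0.95
    prob += pulp.lpSum(offers[o][2] * x[v][o] for o in offers) >= 6.0
    prob += pulp.lpSum(offers[o][3] * x[v][o] for o in offers) >= 0.8

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vms:
    print(v, "->", [o for o in offers if pulp.value(x[v][o]) > 0.5])
```

In the paper's setting, the heuristic for mitigating QoS violations (e.g., via redundant spot instances) would act on top of such an assignment when a chosen offer underperforms.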
In the dynamic field of the Industrial Internet of Things (IIoT), networks are increasingly vulnerable to a diverse range of cyberattacks. This vulnerability necessitates the development of advanced intrusion detection systems (IDSs). Addressing this need, our research contributes to the existing cybersecurity literature by introducing an optimized Intrusion Detection System based on Deep Transfer Learning (DTL), specifically tailored for heterogeneous IIoT networks. Our framework employs a tri-layer architecture that integrates Convolutional Neural Networks (CNNs), Genetic Algorithms (GAs), and bootstrap aggregation ensemble techniques. The methodology is executed in three critical stages: first, we convert a state-of-the-art cybersecurity dataset, Edge_IIoTset, into image data, thereby enabling CNN-based analysis. Second, a GA is used to fine-tune the hyperparameters of each base learning model, enhancing each model's adaptability and performance. Finally, the outputs of the top-performing models are combined using ensemble techniques, bolstering the robustness of the IDS. Through rigorous evaluation protocols, our framework demonstrated exceptional performance, reliably achieving a 100% attack detection accuracy rate. This result establishes our framework as highly effective against 14 distinct types of cyberattacks. The findings bear significant implications for the ongoing development of secure, efficient, and adaptive IDS solutions in the complex landscape of IIoT networks.
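The second and third stages can be illustrated with a short sketch of GA-based hyperparameter search followed by majority-vote aggregation. The search space, the placeholder fitness function, and all parameter values below are assumptions; in the actual framework the fitness would be the validation accuracy of a CNN trained on the image-encoded Edge_IIoTset.

```python
# Hypothetical sketch of GA hyperparameter tuning for CNN base learners plus
# bootstrap-aggregation voting; placeholders stand in for real CNN training.
import random

SEARCH_SPACE = {
    "filters":       [16, 32, 64],
    "kernel_size":   [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [32, 64, 128],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder: would train a CNN with these hyperparameters on the
    # image-converted dataset and return its validation accuracy.
    return random.random()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def ga_search(pop_size=10, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]  # keep top half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

def bagged_predict(models, x):
    # Majority vote over the top-performing base learners (bootstrap aggregation).
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

print(ga_search())
```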
This paper proposes an enhanced method for localizing sensor nodes in wireless sensor networks (WSNs) with obstacles. Such environments lower localization accuracy because locations are estimated from detour distances that circumvent the obstacles; we therefore improve the segmentation technique, which divides the whole area into multiple smaller ones, each containing fewer or no obstacles, to address this issue. Nevertheless, when radio transmissions between sensor nodes are obstructed (as simulated by the radio irregularity model), the signal-strength variation tends to be high, reducing localization accuracy; we thus provide a method for accurately approximating the distance between each pair of an anchor node (whose location is known) and an unknown node by incorporating the related error into the approximation process. Additionally, when nodes with unknown locations lie outside the polygon formed by the anchor nodes, the search area for localization is relatively large, resulting in lower accuracy and a longer search time; we therefore propose a method for reducing the size of the approximation areas by forming boundaries based on the two intersection points between the ranges of the two anchor nodes used to localize an unknown node. However, these reduced search areas can still be large; we further increase the accuracy of the particle swarm optimization (PSO) location estimation method by adaptively adjusting the number of particles. In addition, with PSO, the accuracy of unknown-node location estimation depends on a properly selected fitness function; we therefore incorporate appropriate variables that reflect the distance approximation accuracy between each anchor-unknown node pair. In experiments, we measure performance in sensor node deployment areas of three different shapes: C-shaped, with one hole, and with two rectangular holes. The results show that our method provides higher localization accuracy than others in small-, medium-, and large-scale WSNs. Specifically, our proposed method is 27.46%, 49.28%, 50.33%, and 74.62% more accurate on average than IDE-NSL, PSO-C, min-max PSO, and niching PSO, respectively.
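For readers unfamiliar with PSO-based localization, the following is a minimal sketch of estimating one unknown node's position from noisy anchor-distance estimates. The swarm parameters, noise model, and unweighted fitness are illustrative assumptions; the paper's method additionally restricts the search area, adapts the particle count, and weights the fitness by per-pair distance-approximation accuracy.

```python
# Toy PSO localization of one unknown node from anchor distance estimates;
# parameter values and the fitness function are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 40.0]])   # known anchor positions
true_pos = np.array([20.0, 15.0])
d_est = np.linalg.norm(anchors - true_pos, axis=1) * (1 + rng.normal(0, 0.05, 3))

def fitness(p):
    # Squared mismatch between candidate-to-anchor distances and the estimates.
    d = np.linalg.norm(anchors - p, axis=1)
    return np.sum((d - d_est) ** 2)

n, w, c1, c2 = 30, 0.7, 1.5, 1.5          # swarm size and PSO coefficients
pos = rng.uniform(0, 50, (n, 2))          # particles start inside the search area
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)]

print("estimated:", gbest, "true:", true_pos)
```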
Current crowdsourcing systems suffer from serious problems such as a single point of failure at the server, leakage of user privacy, and unfair arbitration. Storing the interactions between workers, requesters, and crowdsourcing platforms as transactions on a blockchain can effectively address these problems. However, increases in the total computing power of the blockchain do not translate into faster transaction confirmation, which limits the performance of crowdsourcing systems. On the other hand, the growing amount of data on the blockchain makes it harder for nodes to participate in consensus, affecting the security of crowdsourcing systems. To address these problems, in this paper we design a blockchain architecture based on dynamic state sharding, called DSSBD. First, we solve the problems caused by cross-shard transactions and reconfiguration in blockchain state sharding through graph partitioning and relay transactions. Then, we model the optimal block generation problem as a Markov decision process and use deep reinforcement learning to dynamically adjust the number of shards, the block interval, and the block size. This approach improves both the throughput of the blockchain and the proportion of non-malicious nodes. Security analysis shows that the proposed DSSBD can effectively resist attacks such as transaction atomicity attacks, double-spending attacks, Sybil attacks, and replay attacks. The experimental results show that a crowdsourcing system with the proposed DSSBD achieves better throughput, latency, balance, cross-shard transaction proportion, and node reconfiguration proportion, while ensuring security.
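The Markov decision process view can be illustrated with a toy environment whose actions adjust the shard count, block interval, and block size, and whose reward favors throughput while penalizing cross-shard overhead. The dynamics, reward shape, and constants below are assumptions for illustration only; a deep RL agent (e.g., a DQN) would replace the random policy used here.

```python
# Toy sketch of block-generation control as an MDP; NOT the paper's model.
import random

class ShardingEnv:
    def __init__(self):
        self.shards, self.interval, self.block_size = 4, 10.0, 1.0  # count, s, MB

    def state(self):
        return (self.shards, self.interval, self.block_size)

    def step(self, action):
        # Action: change each control knob by one step.
        d_shards, d_interval, d_size = action
        self.shards = max(1, self.shards + d_shards)
        self.interval = max(1.0, self.interval + d_interval)
        self.block_size = max(0.5, self.block_size + d_size)

        # Toy dynamics: throughput grows with shards and block size, while
        # cross-shard coordination eats into it as the shard count rises.
        tps = self.shards * self.block_size * 1000 / self.interval
        cross_shard_penalty = min(0.9, 0.05 * self.shards)
        reward = tps * (1 - cross_shard_penalty)
        return self.state(), reward

env = ShardingEnv()
actions = [(s, i, b) for s in (-1, 0, 1) for i in (-1.0, 0.0, 1.0) for b in (-0.5, 0.0, 0.5)]
total = 0.0
for _ in range(20):                       # random policy stand-in for the DRL agent
    _, r = env.step(random.choice(actions))
    total += r
print("return under random policy:", round(total, 1))
```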
In Software-Defined Networks (SDNs), the control plane and data plane communicate for various purposes, such as applying configurations and collecting statistical data. While various methods have been proposed to reduce the overhead and enhance the scalability of SDNs, the impact of the transport layer protocol used for southbound communication has not been investigated. Existing SDNs rely on the Transmission Control Protocol (TCP) to enforce reliability. In this paper, we show that the use of TCP imposes a considerable overhead on southbound communication, identify the causes of this overhead, and demonstrate how replacing TCP with the Quick UDP Internet Connections (QUIC) protocol can enhance the performance of this communication. We introduce the quicSDN architecture to enable southbound communication in SDNs via the QUIC protocol. We present a reference architecture based on the standard protocols most widely used by the SDN community and show how the controller and switch are revamped to facilitate this transition. We compare, both analytically and empirically, the performance of quicSDN against the traditional SDN architecture and confirm the superior performance of quicSDN. Our empirical evaluations in different settings demonstrate that quicSDN lowers communication overhead and message delivery delay by up to 82% and 45%, respectively, compared to SDNs that use TCP for their southbound communication.
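A back-of-the-envelope model helps convey why the transport protocol matters for southbound delay: connection setup costs differ (roughly 3 RTTs for TCP plus a TLS 1.2 handshake, 1 RTT for a fresh QUIC connection, and 0 RTT on resumption). The RTT counts, RTT value, and message count below are textbook-style assumptions, not the paper's analytical model or measured results.

```python
# Toy comparison of setup plus delivery delay for a southbound channel;
# the constants are illustrative assumptions, not measurements from the paper.
def delivery_delay(handshake_rtts, rtt_ms, messages, per_msg_rtts=0.5):
    # One-way message delivery approximated as half an RTT per message.
    return handshake_rtts * rtt_ms + messages * per_msg_rtts * rtt_ms

rtt = 20.0            # ms, assumed controller-to-switch round-trip time
msgs = 10             # southbound messages sent after connection setup

scenarios = {
    "TCP + TLS 1.2": delivery_delay(3, rtt, msgs),   # TCP handshake + TLS handshake
    "QUIC (first)":  delivery_delay(1, rtt, msgs),   # combined transport/crypto handshake
    "QUIC (0-RTT)":  delivery_delay(0, rtt, msgs),   # resumed connection
}
for name, d in scenarios.items():
    print(f"{name:14s} {d:.1f} ms")
```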