The traditional cloud computing model struggles to handle the vast number of Internet of Things (IoT) services efficiently due to its centralized nature and physical distance from end-users. In contrast, edge and fog computing have emerged as promising solutions for supporting latency-sensitive IoT applications by distributing computational resources closer to the data source. However, these technologies are limited in size and computational capacity, making optimal service placement a critical challenge. This paper addresses this challenge by introducing a dynamic and distributed service placement policy tailored to edge and fog environments. By leveraging the inherent proximity advantages of fog and edge nodes, the proposed policy seeks to enhance service delivery efficiency, reduce latency, and improve resource utilization. The method focuses on placing high-demand services closer to the data generation sources to enhance scheduling efficiency in fog computing environments, and is divided into three interconnected modules. The first module, the service type estimator, distributes services to appropriate nodes; here, we use the Political Optimizer (PO), a recent metaheuristic algorithm, to deploy IoT services. The second module, the service dependency estimator, manages service dependencies; here, we place dependent services near the data using a path matrix based on the Warshall algorithm. The third module, resource demand scheduling, estimates resource demand to facilitate optimal scheduling; here, we use a popularity-based policy to manage resource demand and service execution scheduling. Implementation results demonstrate significant improvements over existing state-of-the-art policies, highlighting the efficacy of the proposed policy in enhancing service delivery within fog-edge computing frameworks.
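The dependency path matrix mentioned above can be sketched as a Warshall-style transitive closure over the service dependency graph; the service indices and dependency edges below are illustrative assumptions, not the paper's data:

```python
# Sketch of a dependency path matrix built with Warshall's transitive
# closure: reach[i][j] is True when service i (transitively) depends on j.

def transitive_closure(n, edges):
    """Compute the reachability (path) matrix for n services."""
    reach = [[False] * n for _ in range(n)]
    for i in range(n):
        reach[i][i] = True                 # every service reaches itself
    for i, j in edges:
        reach[i][j] = True                 # direct dependencies
    # Warshall: allow each service k in turn as an intermediate dependency
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return reach

# Example: 0 -> 1 -> 2, so service 0 transitively depends on service 2
reach = transitive_closure(3, [(0, 1), (1, 2)])
print(reach[0][2])  # True
```

A placement module can consult such a matrix to co-locate a service with everything it transitively depends on, rather than only its direct dependencies.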
Cloud computing is a method of providing various computing services, including software, hardware, databases, data storage, and infrastructure, to the public through the Internet. The rapid expansion of cloud computing services has raised significant concerns over their environmental impact. Such services should be designed in a green manner: energy-efficient, virtualized, consolidated, and eco-friendly. Green Cloud Computing (GCC) is a significant field of study that focuses on minimizing the environmental impact and energy usage of cloud infrastructures. This survey provides a comprehensive overview of the current state of GCC, focusing on challenges, strategies, and future directions. The review begins by identifying important challenges in GCC drawn from practical implementations, surveying the environmental protection and prevention initiatives GCC has introduced, and articulating the demand for long-term technical progress. It then addresses GCC’s primary concerns, such as energy efficiency, resource management, operational costs, and carbon emissions, and categorizes implementations according to algorithms, architectures, frameworks, general issues, and models and methodologies. Furthermore, enhancements in virtualization, multi-tenancy, and consolidation are identified, analyzed, and accurately portrayed to capture recent advances in GCC. Finally, the survey outlines future research directions and opportunities for advancing the field, including the development of novel algorithms, energy-harvesting technologies, and energy-efficient, eco-friendly solutions. By providing a comprehensive overview of GCC, this survey aims to serve as a reference for the continued evolution of emerging technological approaches in the GCC environment.
Currently, the world's energy matrix is in transition: traditional sources of energy generation are continually being replaced by generation systems based on renewable sources to mitigate the climate crisis. In this context, this work presents the mathematical modeling of an LLCL filter, used to connect renewable-source power generation systems to the electrical grid, and a novel hybrid fixed- and adaptive-gain control strategy for current injection into the grid using this system. The hybrid controller is composed of a proportional–integral controller and a direct robust adaptive controller: the first term tracks the reference current, while the second rejects disturbances. Furthermore, a systematic procedure for the controller's parametrization based on the Grey Wolf Optimizer is also provided. The current injected into the grid is controlled with an LLCL filter that has no passive damping resistors in its structure, avoiding the power losses such passive elements would cause. Additionally, the LLCL filter model considers minimal parasitic resistances to evaluate the controller's performance and optimize it for the application of interest, maximizing system performance by ensuring a short transient regime due to the fast closed-loop response. Simulation results indicate high performance of this optimized control strategy, with small tracking error even under grid impedance variations.
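The Grey Wolf Optimizer step can be sketched as follows. This is a minimal, generic GWO loop; the quadratic cost below is a toy stand-in for the closed-loop performance index the paper would evaluate for candidate controller gains, and all constants (pack size, iterations, bounds) are illustrative assumptions:

```python
import random

def gwo(cost, dim, bounds, n_wolves=10, iters=100, seed=0):
    """Minimal Grey Wolf Optimizer: minimise cost over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=cost)                   # alpha, beta, delta lead
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters                   # exploration decays to 0
        for w in wolves[3:]:                    # leaders kept (elitism)
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))  # average of the three pulls
    return min(wolves, key=cost)

# Toy: recover two "gains" near (1.5, 1.5) minimising a quadratic cost bowl
best = gwo(lambda p: sum((x - 1.5) ** 2 for x in p), dim=2, bounds=(-5, 5))
```

In the paper's setting, `cost` would run a closed-loop simulation for the candidate PI and adaptive gains and score the transient response.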
An adaptive metaheuristic optimization-based QoS-aware, Energy-balancing, Secure Routing Protocol (AQoS-ESRP) is proposed in this article. The network is modelled as a biconcentric hexagon (BiCon-HexA), which is divided into six sectors to support effective data aggregation, and clusters are then formed within each sector. The optimal cluster head (CH) selection mechanism is modelled by an Adaptive Hunter-Prey Optimization (AdapH-PO) algorithm considering QoS parameters. Data aggregation is then secured with an enhanced encryption approach: upgraded elliptic curve cryptography (UEllip-CC) encodes data at the CH, improving the security of data transmission. Furthermore, CHs cooperate in multi-hop routing of data packets to reduce the power consumption of the wireless sensor network (WSN). To determine the optimal route for data transmission, an energy-balanced multi-path routing algorithm called the improved convolutional osprey network (ICON) is presented. Nevertheless, data transmission nodes can become overloaded in the routing phase; this congestion problem is addressed by applying an improved version of the Random Early Detection (RED) congestion control model to discard data packets more selectively. AQoS-ESRP is simulated in MATLAB, and its performance is evaluated using different metrics. Compared to existing systems, the simulation results clearly indicate significantly higher throughput and lower delay. Thus, the AQoS-ESRP model maximizes overall data transfer in the WSN.
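The classic RED rule that the improved congestion-control module builds on can be sketched briefly; the threshold values and EWMA weight below are illustrative defaults, not the paper's tuned parameters:

```python
# Classic Random Early Detection (RED): track a smoothed queue length and
# drop packets with a probability that ramps up between two thresholds.

def ewma_queue(avg, sample, w=0.002):
    """Exponentially weighted moving average of the queue length."""
    return (1 - w) * avg + w * sample

def red_drop_probability(avg_q, min_th=5.0, max_th=15.0, max_p=0.1):
    """Drop probability as a function of the average queue length."""
    if avg_q < min_th:
        return 0.0                 # light load: never drop
    if avg_q >= max_th:
        return 1.0                 # severe congestion: drop everything
    # linear ramp between the two thresholds
    return max_p * (avg_q - min_th) / (max_th - min_th)

print(red_drop_probability(10.0))  # 0.05, halfway up the ramp
```

An "improved" RED variant would typically adapt `max_p` or the thresholds to traffic conditions so that drops target the flows actually causing congestion.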
Virtual machine placement (VMP) is a well-known problem in Cloud Data Centers (CDCs). Efficient virtual machine (VM) allocation is essential for processor speed and energy saving, and is even more useful where the CDC supports an Internet of Things (IoT) infrastructure. To enhance energy savings, we aim to improve the adaptive four-threshold energy-aware framework for VM deployment. We observed that the threshold used to identify an over-loaded host plays a crucial role. To determine the appropriate threshold, we employed density-based spatial clustering of applications with noise (DBSCAN), median absolute deviation (MAD), and interquartile range (IQR) within the medium fit power efficient decreasing (MFPED) algorithm. Our proposed algorithm, modified medium fit energy efficient decreasing (MMFEED), achieves a reduction in energy consumption of 47.3%, 46.1%, 39%, 23.2%, 10.9%, and 3.4% compared to IQR, MAD, the static threshold (THR), the exponential weighted moving average (EWMA), modified energy-efficient virtual machine placement (MEEVMP), and the adaptive four-threshold energy-aware framework for VM deployment (AFED-EF), respectively, under the minimum migration time (MMT) selection policy. The proposed algorithm thus outperforms these algorithms in energy consumption under the MMT VM selection policy.
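The MAD- and IQR-based overload thresholds can be sketched as follows, in the style of adaptive upper-utilization thresholds: the more stable a host's utilization history, the higher the threshold before it is declared overloaded. The safety parameters `s` are illustrative choices, not the paper's tuned values:

```python
import statistics

def mad_threshold(history, s=2.5):
    """Upper CPU-utilization threshold from the median absolute deviation."""
    med = statistics.median(history)
    mad = statistics.median([abs(u - med) for u in history])
    return 1.0 - s * mad

def iqr_threshold(history, s=1.5):
    """Upper CPU-utilization threshold from the interquartile range."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    return 1.0 - s * (q3 - q1)

def is_overloaded(current_util, history, threshold_fn=mad_threshold):
    """A host is overloaded when its utilization exceeds the threshold."""
    return current_util > threshold_fn(history)

history = [0.50, 0.52, 0.48, 0.51, 0.49]   # stable host: high threshold
print(is_overloaded(0.99, history))        # True
```

A DBSCAN-based variant would instead cluster the utilization samples and treat points outside the dense cluster as evidence of overload, but the decision interface stays the same.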
Cloud computing is the foremost technology that reliably connects end-to-end users. Task scheduling is a critical process affecting the performance of cloud computing. Scheduling enormous volumes of data increases response time and makespan, making the system less efficient. Therefore, a unique Squirrel Search-based AlexNet Scheduler (SSbANS) is created for adequate scheduling and performance enhancement in cloud computing suitable for collaborative learning. The system processes the tasks requested by cloud users. Initially, the priority of each task is checked and the tasks are arranged accordingly. The optimal resource is then selected using the fitness function of the squirrel search, considering the data rate and the job schedule. Further, during the scheduled task-sharing process, the system continuously checks for overloaded resources and rebalances them based on the squirrel distribution function. The efficacy of the model is reviewed in terms of response time, resource usage, makespan, and throughput. The model achieved higher throughput and resource usage with lower response and makespan times.
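The two scheduling steps described above, priority ordering followed by fitness-based resource selection, can be sketched as a toy loop. The fitness form (queue length over data rate) and all task and resource fields are illustrative assumptions, not the paper's exact function:

```python
import heapq

def fitness(res):
    """Lower is better: prefer fast links with short queues (toy form)."""
    return res["queued"] / res["data_rate"]

def schedule(tasks, resources):
    """Serve tasks in priority order, each to the best-fitness resource."""
    heapq.heapify(tasks)                      # smallest priority value first
    plan = []
    while tasks:
        _, name = heapq.heappop(tasks)
        best = min(resources, key=lambda r: fitness(resources[r]))
        resources[best]["queued"] += 1        # account for the new load
        plan.append((name, best))
    return plan

tasks = [(2, "render"), (1, "ingest"), (3, "cleanup")]
resources = {"vm-a": {"data_rate": 100, "queued": 1},
             "vm-b": {"data_rate": 80, "queued": 1}}
print(schedule(tasks, resources))
```

Because the queue count is updated after each assignment, load spreads across resources instead of piling onto the initially best one, which is the overload-balancing behaviour the abstract describes.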
Single clustering protocols cannot meet the event-driven and time-triggered traffic requirements of Cognitive Radio Sensor Networks (CRSNs). The long wait between the completion of events and the process of clustering and searching for accessible routes increases information transmission time. This paper proposes a Hybrid Boosted Chameleon and Modified Honey Badger optimization Algorithm-based Energy Efficient cluster routing protocol (HBCMHBOA) to handle traffic-driven information transfer with energy efficiency in CRSNs. HBCMHBOA is one of the few clustering protocols that addresses both the event-driven and time-triggered requirements of CRSNs. The integrated Boosted Chameleon and Modified Honey Badger optimization algorithm determines the optimal number of clusters and constructs the primitive cluster structure automatically to serve time-triggered traffic periodically. The protocol adopts a priority-based schedule and an associated frame structure to guarantee reliable event-driven information delivery. It leverages the merits of time-triggering in constructing the clustering architecture and ensures that no cluster construction or route selection takes place after emergent events occur. This characteristic permits only the nodes and their associated Cluster Heads (CHs) to discover emergent events, and it reduces the number of nodes involved, especially when the sink is positioned in a corner, thereby minimizing delay and node energy consumption. Simulation results confirm that HBCMHBOA reduces total energy consumption and the number of covered nodes by an average of 34.12% and 26.89%, respectively, compared with prevailing studies.
Smart cities represent the future of urban evolution, characterized by the intricate integration of the Internet of Things (IoT). This integration sees everything, from traffic management to waste disposal, governed by interconnected and digitally managed systems. As fascinating as the promise of such cities is, they have their challenges. A significant concern in this digitally connected realm is the introduction of fake clients. These entities, masquerading as legitimate system components, can execute a range of cyber-attacks. This research focuses on the issue of fake clients by devising a detailed simulated smart city model utilizing the Netsim program. Within this simulated environment, multiple sectors collaborate with numerous clients to optimize performance, comfort, and energy conservation. Fake clients, which appear genuine but act with malicious intent, are introduced into this simulation to replicate the real-world challenge. After the simulation is configured, the data flows are captured using Wireshark and saved as a CSV file, differentiating between real and fake clients. We applied MATLAB machine learning techniques to the captured data set to address the threat these fake clients posed. Various machine learning algorithms were tested, and the k-nearest neighbors (KNN) classifier showed a remarkable detection accuracy of 98.77%. Specifically, our method increased detection accuracy by 4.66%, from 94.02% to 98.68%, over three experiments, and enhanced the Area Under the Curve (AUC) by 0.49%, reaching 99.81%. Precision and recall also saw substantial gains, with precision improving by 9.09%, from 88.77% to 97.86%, and recall improving by 9.87%, from 89.23% to 99.10%. The comprehensive analysis underscores the role of preprocessing in enhancing overall performance, highlighting the method's superiority in detecting fake IoT clients in smart city environments compared to conventional approaches. Our research thus introduces a powerful model for protecting smart cities, merging sophisticated detection techniques with robust defenses.
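The KNN detection step can be sketched in a few lines. The two flow features (normalized packet rate and mean payload size) and the training points below are illustrative assumptions standing in for the Wireshark-derived feature set:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    nearest = sorted(range(len(train)), key=lambda i: math.dist(train[i], x))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy flow features: (packet rate, mean payload size), both normalized
train = [(0.90, 0.80), (0.85, 0.90), (0.10, 0.20), (0.15, 0.10)]
labels = ["fake", "fake", "real", "real"]
print(knn_predict(train, labels, (0.80, 0.85)))  # fake
```

Because KNN votes on raw distances, the preprocessing the abstract highlights (normalizing each feature to a common scale) directly determines which neighbours count as "near", which is why it moves accuracy so much.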