Pub Date: 2024-07-22. DOI: 10.1016/j.suscom.2024.101024
P. Sathishkumar , Narendra Kumar , S. Hrushikesava Raju , D. Rosy Salomi Victoria
Cloud computing is the foremost technology for reliably connecting end users. Task scheduling is a critical process affecting the performance of cloud computing. Scheduling enormous volumes of data increases response time and makespan and makes the system less efficient. Therefore, a unique Squirrel Search-based AlexNet Scheduler (SSbANS) is created for effective scheduling and performance enhancement in cloud computing suited to collaborative learning. The system processes the tasks that cloud users request. Initially, the priority of each task is checked and the tasks are arranged accordingly. The optimal resource is then selected using the fitness function of the squirrel search, considering the data rate and the job schedule. Further, during the scheduled task-sharing process, the system continuously checks for overloaded resources and rebalances them based on the squirrel distribution function. The efficacy of the model is reviewed in terms of response time, resource usage, makespan, and throughput. The model achieved higher throughput and resource usage with lower response time and makespan.
"An intelligent task scheduling approach for the enhancement of collaborative learning in cloud computing" (Sustainable Computing-Informatics & Systems, vol. 43, Article 101024).
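The fitness-driven resource selection the abstract describes can be sketched roughly as follows; the paper's exact fitness function is not given here, so the weighting of data rate against queued load, and the resource fields, are assumptions:

```python
# Hypothetical sketch of fitness-based resource selection in the spirit of
# SSbANS: score each resource on data rate (higher is better) and queued
# load (lower is better), then pick the best, like the 'optimal food
# source' a squirrel glides toward in squirrel search.

def fitness(resource, w_rate=0.6, w_load=0.4):
    """Score a resource; higher is better. Weights are illustrative."""
    return w_rate * resource["data_rate"] - w_load * resource["queued_jobs"]

def select_resource(resources):
    """Return the resource with the best fitness score."""
    return max(resources, key=fitness)

resources = [
    {"id": "vm-1", "data_rate": 80.0, "queued_jobs": 12},
    {"id": "vm-2", "data_rate": 95.0, "queued_jobs": 30},
    {"id": "vm-3", "data_rate": 70.0, "queued_jobs": 2},
]
best = select_resource(resources)  # → vm-2 under these weights
```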
Pub Date: 2024-07-20. DOI: 10.1016/j.suscom.2024.101023
G. Sathya , C. Balasubramanian
Single clustering protocols cannot meet the event-driven and time-triggered traffic requirements of Cognitive Radio Sensor Networks (CRSNs). The long wait between the completion of events and the process of clustering and searching for accessible routes increases information transmission time. This paper proposes a Hybrid Boosted Chameleon and Modified Honey Badger Optimization Algorithm-based Energy Efficient cluster routing protocol (HBCMHBOA) for handling traffic-driven information transfer with energy efficiency in CRSNs. HBCMHBOA is one of the few clustering protocols that is both event-driven and time-triggered, as CRSNs require. The integration of the Boosted Chameleon and Modified Honey Badger optimization algorithms determines the optimal number of clusters and constructs the primitive cluster structure automatically to serve time-triggered traffic periodically. The protocol adopts a priority-based schedule and an associated frame structure to guarantee reliable event-driven information delivery. It leverages the merits of time-triggering in constructing the clustering architecture and ensures that no cluster construction or route selection takes place after emergent events. This characteristic permits only the nodes and their associated Cluster Heads (CHs) to discover emergent events. It also covers fewer nodes, especially when the sink is positioned in a corner, minimizing delay and node energy consumption. Simulation results confirm that HBCMHBOA reduces total energy consumption and the number of covered nodes by 34.12 % and 26.89 % on average compared with prevailing studies.
"Hybrid Boosted Chameleon and modified Honey Badger optimization algorithm-based energy efficient cluster routing protocol for cognitive radio sensor network" (Sustainable Computing-Informatics & Systems, vol. 43, Article 101023).
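A cluster-head election of the kind HBCMHBOA automates might look like this in miniature; the residual-energy and distance-to-sink terms and their weights are illustrative assumptions, not the paper's actual fitness:

```python
# Toy cluster-head scoring: prefer nodes with high residual energy that
# sit close to the sink. HBCMHBOA's real fitness is not published in the
# abstract, so this weighted combination is a stand-in.
import math

def ch_score(node, sink=(0.0, 0.0), w_energy=0.7, w_dist=0.3):
    """Higher score = better cluster-head candidate."""
    d = math.dist(node["pos"], sink)
    return w_energy * node["energy"] - w_dist * d

def elect_cluster_head(nodes):
    return max(nodes, key=ch_score)

nodes = [
    {"id": 1, "energy": 0.9, "pos": (10.0, 10.0)},
    {"id": 2, "energy": 0.5, "pos": (2.0, 2.0)},
    {"id": 3, "energy": 0.8, "pos": (30.0, 30.0)},
]
head = elect_cluster_head(nodes)  # node 2: modest energy but nearest the sink
```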
Pub Date: 2024-07-19. DOI: 10.1016/j.suscom.2024.101019
Mahmoud AlJamal , Ala Mughaid , Bashar Al shboul , Hani Bani-Salameh , Shadi Alzubi , Laith Abualigah
Smart cities represent the future of urban evolution, characterized by the intricate integration of the Internet of Things (IoT). Everything, from traffic management to waste disposal, is governed by interconnected and digitally managed systems. As fascinating as the promise of such cities is, they come with challenges. A significant concern in this digitally connected realm is the introduction of fake clients: entities that masquerade as legitimate system components and can execute a range of cyber-attacks. This research addresses the fake-client problem by devising a detailed simulated smart city model in the NetSim program. Within this simulated environment, multiple sectors collaborate with numerous clients to optimize performance, comfort, and energy conservation. Fake clients, which appear genuine but act maliciously, are introduced into the simulation to replicate the real-world challenge. After the simulation is configured, the data flows are captured with Wireshark and saved as a CSV file, differentiating between real and fake clients. We applied MATLAB machine learning techniques to the captured dataset to address the threat these fake clients pose. Various machine learning algorithms were tested, and the k-nearest neighbors (KNN) classifier showed a remarkable detection accuracy of 98.77 %. Specifically, our method increased detection accuracy by 4.66 %, from 94.02 % to 98.68 %, over three experiments, and enhanced the Area Under the Curve (AUC) by 0.49 %, reaching 99.81 %. Precision and recall also saw substantial gains: precision improved by 9.09 %, from 88.77 % to 97.86 %, and recall by 9.87 %, from 89.23 % to 99.10 %. The comprehensive analysis underscores the role of preprocessing in enhancing overall performance, highlighting the approach's superiority in detecting fake IoT clients in smart city environments compared with conventional methods.
Our research introduces a powerful model for protecting smart cities, merging sophisticated detection techniques with robust defenses.
"Optimizing risk mitigation: A simulation-based model for detecting fake IoT clients in smart city environments" (Sustainable Computing-Informatics & Systems, vol. 43, Article 101019).
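The KNN detection step can be illustrated with a minimal from-scratch classifier; the paper used MATLAB's KNN on Wireshark-captured traffic, so the two features below (packet rate, mean payload size) are assumed for illustration:

```python
# Minimal k-nearest-neighbours sketch of the fake-client detection step:
# label a traffic sample by majority vote among its k nearest training
# samples under Euclidean distance.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((feature, ...), label) pairs; returns the
    majority label of the k samples closest to query."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# (packet_rate, mean_payload_bytes) -> label; values are illustrative.
train = [
    ((10.0, 500.0), "real"), ((12.0, 480.0), "real"), ((11.0, 520.0), "real"),
    ((90.0, 60.0), "fake"), ((85.0, 55.0), "fake"), ((95.0, 65.0), "fake"),
]
print(knn_predict(train, (88.0, 58.0)))  # → fake
```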
Pub Date: 2024-07-18. DOI: 10.1016/j.suscom.2024.101021
M. Deivakani , M. Sahaya Sheela , K. Priyadarsini , Yousef Farhaoui
Mobile Ad-hoc Networks (MANETs) have gained significant attention in recent years owing to their proliferation and wide range of applications. Securing MANETs against modern cyber threats such as Distributed Denial of Service (DDoS) attacks, Advanced Persistent Threats (APTs), insider threats, ransomware, zero-day exploits, and social engineering tactics is not easy. These complex attacks target network infrastructure, exploit weaknesses in communication protocols, and manipulate user actions. Although intrusion detection systems (IDS) have improved, fully safeguarding MANETs remains difficult. The purpose of this research is to create advanced methods that accurately detect and mitigate attacks inside MANETs. Sensor-based Feature Extraction (SFE) is applied to extract useful network features such as Received Signal Strength Indication (RSSI) and Time of Travel (TOT) from the NSL-KDD and CICIDS-2017 datasets. Precise Probability Genetic Algorithm (PPGA) optimization removes irrelevant details, enhancing precision in detecting attacks. Normal and attack labels are predicted with a Stacked Recurrent Long Short-Term Memory (SRLSTM) method, fine-tuning the classifier's parameters in every layer to improve outcomes. Various evaluation metrics are used to validate and compare the suggested methods against current attack detection techniques. NSL-KDD, a benchmark dataset in network intrusion detection research, contains a wide variety of network traffic instances labeled as normal or as different attacks. CICIDS-2017 is similarly extensive, including real-world network traffic traces of both regular activity and harmful actions.
The purpose is to enhance the existing state of MANET security so that it withstands cyber threats more strongly. The outcomes show that attack detection accuracy improved greatly, reaching 99 % compared with other methods, as shown by the detailed assessment metrics. Better handling of big datasets with top detection accuracy reduces the time needed for training and testing models to 8.9 s. Misclassification decreases, and the ability to differentiate normal network actions from harmful intrusions improves. The resistance of MANETs to different cyber threats is enhanced, guaranteeing safe and dependable network communication in dynamic, decentralized settings.
"An intelligent security mechanism in mobile Ad-Hoc networks using precision probability genetic algorithms (PPGA) and deep learning technique (Stacked LSTM)" (Sustainable Computing-Informatics & Systems, vol. 43, Article 101021).
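The PPGA feature-selection idea can be stripped to a toy genetic algorithm over a feature mask; the abstract does not specify the "precise probability" operators, so a plain GA with a synthetic fitness (rewarding two assumed-informative features, penalising the rest) stands in:

```python
# Toy GA feature selection: chromosomes are 0/1 masks over 6 features.
# Fitness rewards keeping the (assumed) informative features {0, 3} and
# penalises keeping irrelevant ones, mimicking "removing unrelated details".
import random

INFORMATIVE = {0, 3}   # assumed ground-truth useful feature indices
N_FEATURES = 6

def fitness(mask):
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & INFORMATIVE) - 0.3 * len(kept - INFORMATIVE)

def evolve(generations=40, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # bit-flip mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()  # should keep features 0 and 3 and drop most others
```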
Pub Date: 2024-07-16. DOI: 10.1016/j.suscom.2024.101015
R. Elavarasan , A. Rajaram
In this study, we use a topology control technique to tackle the issue of energy balance and consumption minimization in wireless sensor networks. By maintaining network connectivity while sensibly adjusting the transmission power level, such an algorithm can reduce and balance energy usage. This study provides energy-welfare topology control using a game-theoretic approach, calculating energy welfare as a utility metric for energy populations using the welfare function from the social sciences. Energy balance occurs when every node works to improve its local society's energy situation to the best of its ability. We demonstrate that the resulting game is an intriguing one with a single Nash equilibrium that is Pareto optimal. In economic terms, Pareto optimality is a situation in which one person's circumstances cannot be improved without making another person's worse. We demonstrate our methodology's superiority in establishing energy balance and efficiency in wireless sensor networks by contrasting the simulation results of our algorithm with those of other approaches; our approach surpasses existing methods by a wide margin. For reliable and long-lasting wireless sensor applications, this study offers insight into how to maximize network performance while preserving energy resources.
"Distributed clustering model for energy efficiency based topology control using game theory in wireless sensor networks" (Sustainable Computing-Informatics & Systems, vol. 44, Article 101015).
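The energy-welfare best response can be sketched as follows; the welfare form (mean residual energy minus an inequality penalty) is an assumed stand-in for the social-science welfare function the paper borrows, and the connectivity constraint is omitted for brevity:

```python
# Each node evaluates a welfare function over its neighbourhood's residual
# energies and picks the transmit power level that maximises local welfare.
# Lower power leaves the node itself more residual energy.

def welfare(energies, inequality_weight=0.5):
    """Mean residual energy, penalised by mean absolute deviation."""
    mean = sum(energies) / len(energies)
    spread = sum(abs(e - mean) for e in energies) / len(energies)
    return mean - inequality_weight * spread

def best_response(node_energy, neighbour_energies, power_levels):
    """Choose the power level whose projected residual energies
    maximise local welfare (connectivity check omitted)."""
    def projected(p):
        return welfare([node_energy - p] + neighbour_energies)
    return max(power_levels, key=projected)

levels = [0.1, 0.2, 0.4]
choice = best_response(1.0, [0.6, 0.7], levels)  # lowest power wins here
```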
Pub Date: 2024-07-14. DOI: 10.1016/j.suscom.2024.101020
Gholamreza Abdi , Mehdi Ahmadi Jirdehi , Hasan Mehrjerdi
The emergence of distributed generation (DG) from renewable energy sources has led to the adoption of microgrids as an alternative energy solution. However, implementing microgrids presents challenges, particularly in coordinating relay protection, due to factors such as distributed generation sources, bidirectional power flow, variable short-circuit levels, and changes in network behavior. Although overcurrent relays (OCRs) are frequently utilized in microgrid protection, a more adaptable strategy is needed as grid architectures transition from radial to non-radial. This paper proposes a new method to optimize the coordination of OCRs in microgrids by adjusting parameters such as time multiplier settings (TMS), plug settings (PS), and characteristic curve selection. The study utilizes meta-heuristic techniques, namely the harmony search algorithm (HSA) and the non-dominated sorting genetic algorithm-II (NSGA-II), for optimal coordination. Simulations on a microgrid and bus test system demonstrate the effectiveness of the proposed approach in enhancing protection indicators such as sensitivity, speed, selectivity, and reliability in microgrid operations. The results also indicate that the computation time of HSA is less than that of NSGA-II, but as DG capacity increases, relay operation time tends to decrease continuously.
"Optimal coordination of overcurrent relays in microgrids using meta-heuristic algorithms NSGA-II and harmony search" (Sustainable Computing-Informatics & Systems, vol. 43, Article 101020).
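The quantity being tuned when optimizing TMS and PS is typically the IEC standard-inverse operating time, t = TMS · 0.14 / (M^0.02 − 1), with M the fault-to-pickup current ratio; a sketch with made-up fault currents and a coordination-time-interval check:

```python
# Relay operating time under the IEC standard-inverse characteristic.
# The TMS/pickup/fault-current values below are illustrative, not from
# the paper's test system.

def op_time(tms, pickup, fault_current, a=0.14, b=0.02):
    """IEC standard-inverse curve: t = TMS * a / (M**b - 1)."""
    m = fault_current / pickup
    return tms * a / (m ** b - 1)

def coordinated(primary_t, backup_t, cti=0.3):
    """Backup must operate at least one coordination-time-interval later."""
    return backup_t - primary_t >= cti

t_primary = op_time(tms=0.1, pickup=200.0, fault_current=2000.0)  # ~0.30 s
t_backup = op_time(tms=0.3, pickup=250.0, fault_current=2000.0)   # ~0.99 s
```

An optimizer like HSA or NSGA-II searches over the TMS/PS values (and curve choices) of all relays to minimize total operating time subject to every such primary/backup pair staying coordinated.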
With the development of 5G, mobile users are increasing massively. Some mobile applications, such as healthcare, are latency-critical and require real-time data processing. A preference-based task offloading framework in mobile edge computing with device-to-device offloading (MECD2D) is proposed to fulfill the latency demands of such applications at minimum energy consumption while ensuring resiliency. The problem is formulated as a complex constraint-based non-linear optimization problem. Resources are allocated in two steps: first, based on latency demand to ensure resiliency; second, the allocated resources are optimized using a non-cooperative mean-field game for the dynamic system. To verify performance under a dynamic network, the results are executed on a real-world Shanghai dataset. The computational results indicate that the proposed algorithm performs better in terms of energy consumption; throughput, network utilization, and task computation are also analysed. The results are verified by comparing the proposed algorithm with existing Q-learning and mean-field game algorithms. On the dataset, energy consumption improves by 5–10 % and 10–50 % compared with Q-learning and the mean-field game, respectively.
"Task offloading framework to meet resiliency demand in mobile edge computing system" by Aakansha Garg, Rajeev Arya, Maheshwari Prasad Singh (Sustainable Computing-Informatics & Systems, vol. 43, Article 101018; published 2024-07-11, DOI: 10.1016/j.suscom.2024.101018).
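A per-task offloading decision in the spirit of MECD2D can be sketched as: meet the deadline first (resiliency), then minimize energy. The latency and energy models below are illustrative placeholders, not the paper's formulation:

```python
# Choose an execution mode (local / D2D / edge) per task: among modes
# that meet the task's latency deadline, pick the cheapest in energy;
# if none is feasible, fall back to the fastest mode.

def choose_mode(task, modes):
    feasible = [m for m in modes if m["latency"](task) <= task["deadline"]]
    if not feasible:                 # resiliency fallback: fastest option
        return min(modes, key=lambda m: m["latency"](task))
    return min(feasible, key=lambda m: m["energy"](task))

# Illustrative cost models: CPU at 1/2/10 GHz, links at 5/2 Mbit/s.
modes = [
    {"name": "local", "latency": lambda t: t["cycles"] / 1e9,
     "energy": lambda t: 2e-9 * t["cycles"]},
    {"name": "d2d",   "latency": lambda t: t["bits"] / 5e6 + t["cycles"] / 2e9,
     "energy": lambda t: 1e-7 * t["bits"] / 8},
    {"name": "edge",  "latency": lambda t: t["bits"] / 2e6 + t["cycles"] / 1e10,
     "energy": lambda t: 3e-7 * t["bits"] / 8},
]
task = {"bits": 4e5, "cycles": 8e8, "deadline": 0.5}
mode = choose_mode(task, modes)  # local misses the deadline; D2D is cheapest
```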
Pub Date: 2024-07-04. DOI: 10.1016/j.suscom.2024.101017
Abhilasha Pawar , R.K. Viral , Mohit Bansal
Aim
In recent years, renewable distributed generation (DG) has grown to deliver sustainable electricity with minimal environmental impact. However, renewable DG poses new challenges in the distribution system expansion planning (DSEP) problem. To address them, this paper suggests a new mathematical model for distribution system expansion planning using a novel hybrid optimization strategy.
Method
The optimal expansion plan for the distribution system is achieved using the hybrid optimization model, named the Squirrel Search Insisted Cat Swarm Optimization (SSI-CS) algorithm. The proposed hybrid algorithm incorporates the characteristic features of the Squirrel Search Algorithm (SSA) and the Cat Swarm Optimization (CSO) algorithm to optimize solar, wind, and biomass capacity. This combination aims to strike a balance between global and local search, ultimately leading to more cost-effective results. The distribution system variables, such as DG type, size/capacity, location, real power, reactive power, and the solar and wind capacity under load demand uncertainty, act as inputs to the proposed algorithm. The main objective of attaining minimal cost for the expansion plan is checked, and the cycle repeats until the optimal (minimum-cost) solution is obtained.
Result
The experimental analysis, using an IEEE-33 bus system with five system states, is executed in MATLAB/Simulink. The suggested SSI-CS model attained a minimal operational cost of 7433.4, lower than that achieved by GWO, SSA, and CSO.
Conclusion
Hence, the proposed SSI-CS shows promise as an efficient and effective approach for distribution system expansion planning.
{"title":"A novel squirrel-cat optimization based optimal expansion planning for distribution system","authors":"Abhilasha Pawar , R.K. Viral , Mohit Bansal","doi":"10.1016/j.suscom.2024.101017","DOIUrl":"10.1016/j.suscom.2024.101017","url":null,"abstract":"<div><h3>Aim</h3><p>In recent years, renewable distributed generation (DG) has grown to deliver sustainable electricity with minimal environmental impact. However, renewable DG poses new provocation in the distribution system expansion planning problem (DSEP). To address those problems, this paper suggests a new mathematical model for distribution system expansion plans using a novel hybrid optimization strategy.</p></div><div><h3>Method</h3><p>The optimal expansion plan for the distribution system is achieved using the hybrid optimization model, named Squirrel Search Insisted Cat Swarm Optimization (SSI-CS) algorithm. The proposed hybrid optimization algorithm is developed with the incorporation of the characteristics features of Squirrel search optimization (SSA) algorithm and Cat Swarm Optimization (CSO) Algorithm to optimize the solar capacity, wind capacity and biomass capacity. This combination aims to strike a balance between global and local optimization, ultimately leading to better cost-effective results. The Distribution systems variables like DG type, size/capacity, location, real power, reactive power, and the solar and wind capacity during load demand uncertainty act as the input to the proposed hybrid optimization algorithm. The main objective of attaining minimal cost for the expansion plan of the distribution system is checked, and the cycle is repeated until obtaining the optimal solution (minimum cost).</p></div><div><h3>Result</h3><p>The experimental analysis using an IEEE-33 bus system with 5 system states is executed in MATLAB/Simulink. 
The suggested SSI-CS model attained a minimal operational cost of 7433.4, lower than that achieved by GWO, SSA and CSO.</p></div><div><h3>Conclusion</h3><p>Hence, the proposed SSI-CS shows promise as an efficient and effective approach for distribution system expansion planning.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"43 ","pages":"Article 101017"},"PeriodicalIF":3.8,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141698650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-30DOI: 10.1016/j.suscom.2024.101016
V.N. Jayamani , S. Pavai Madheswari , P. Suganthi , S.A. Josephine
The conservation of power in wireless sensor networks (WSNs) is critical due to the difficulty of replacing or recharging batteries in remote sensor nodes. Additionally, sensor node failure is inevitable in WSNs. To overcome these difficulties, this study proposes an N-policy M/M/1 queueing system model with an unreliable server. This model offers useful insights for improving performance, optimizing energy use, and prolonging the lifespan of WSNs. The study examines how different parameters and N-values affect the system's performance by analyzing the average system size under various scenarios. The proposed model accounts for important performance measures, including the total system size and the average number of data packets in the busy, idle, and down states. These measures aid in understanding the system's behavior and direct the N-policy optimization process to reduce setup, holding, server-downtime, and operating-state costs. The analytical results are supported by numerical illustrations showing that system size is reduced by higher repair and service rates and increased by higher breakdown rates. The cost function is estimated using MATLAB simulations, which also find the ideal N-value that minimizes the overall expected cost per unit time. The findings demonstrate that an optimal N-policy can greatly enhance system performance and energy efficiency, ensuring the WSN functions well even in the face of node failure and power limitations. Based on in-depth numerical analysis and performance evaluation, this paper offers a thorough framework for improving WSN efficiency and reliability through strategic N-policy implementation.
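The trade-off behind the N-policy has a well-known closed form for the classic reliable-server N-policy M/M/1 queue (the paper's model additionally includes server breakdowns and repairs, which this sketch omits): holding cost grows with N while setup cost per unit time shrinks. A minimal sketch, with illustrative cost parameters:

```python
from math import sqrt

def npolicy_cost(lam, mu, h, S, N):
    """Expected cost rate for a classic N-policy M/M/1 queue with a
    reliable server (a simplification of the paper's model).
    lam: packet arrival rate, mu: service rate,
    h: holding cost per packet per unit time, S: setup cost per cycle.
    Mean system size: L(N) = rho/(1 - rho) + (N - 1)/2.
    Cycles start at rate lam*(1 - rho)/N, so setup cost rate is S*lam*(1 - rho)/N."""
    rho = lam / mu
    assert rho < 1, "queue must be stable"
    L = rho / (1 - rho) + (N - 1) / 2
    return h * L + S * lam * (1 - rho) / N

def optimal_N(lam, mu, h, S):
    """Continuous optimum N* = sqrt(2*S*lam*(1 - rho)/h); return the
    cheaper of its two integer neighbours."""
    rho = lam / mu
    n_star = sqrt(2 * S * lam * (1 - rho) / h)
    lo, hi = max(1, int(n_star)), int(n_star) + 1
    return min(lo, hi, key=lambda n: npolicy_cost(lam, mu, h, S, n))

# Illustrative numbers (assumed, not from the paper): arrivals at 2 packets/s,
# service at 5 packets/s, holding cost 1 per packet-second, setup cost 30.
N_best = optimal_N(2.0, 5.0, 1.0, 30.0)
```

Adding breakdown and repair rates, as the paper does, inflates the effective system size and shifts the optimal N, which is why the authors resort to MATLAB simulation rather than this closed form.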
{"title":"An N - policy M/M/1 queueing model for energy saving mechanism in Networks","authors":"V.N. Jayamani , S. Pavai Madheswari , P. Suganthi , S.A. Josephine","doi":"10.1016/j.suscom.2024.101016","DOIUrl":"10.1016/j.suscom.2024.101016","url":null,"abstract":"<div><p>The conservation of power in wireless sensor networks (WSNs) is critical due to the difficulty of replacing or recharging batteries in remote sensor nodes. Additionally, sensor node failure is inevitable in WSNs. In order to overcome these difficulties, this study proposes an N-policy M/M/1 queuing system model with an unstable server. This model offers insightful information for improving performance, maximizing energy use, and prolonging the lifespan of WSNs. The study looks into how different parameters and N-values affect the system's performance by examining the average system size under various scenarios. Important performance parameters are taken into consideration in the suggested model, including the total system size and the average number of data packets in the busy, idle, and down stages. These measurements aid in the comprehension of the behavior of the system and direct the N-policy optimization process to reduce setup, holding, server downtime, and operating state expenses. The analytical results are supported by numerical representations that show system size is reduced by higher repair and service rates and increased by higher breakdown rates. The cost function is estimated using MATLAB simulations, which also find the ideal N-value to reduce the overall predicted cost per unit of time. The findings demonstrate that an ideal N-policy can greatly enhance system performance and energy efficiency, guaranteeing the WSN functions well even in the face of node failure and power limitations. 
Based on in-depth numerical analysis and performance evaluation, this paper offers a thorough framework for improving WSN efficiency and reliability through strategic N-policy implementation.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101016"},"PeriodicalIF":3.8,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142099344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-06-26DOI: 10.1016/j.suscom.2024.101010
Emanuel Adler Medeiros Pereira, Jeferson Fernando da Silva Santos, Erick de Andrade Barboza
Safe drinking water is an essential resource and a fundamental human right, yet it remains out of reach for billions of people, posing numerous health risks. A key obstacle in monitoring water quality is managing and analyzing extensive data. Machine learning models have become increasingly prevalent in water quality monitoring, aiding decision makers and safeguarding public health. An integrated system that combines electronic sensors with a machine learning model offers immediate feedback and can be deployed in any location. This type of system operates independently of an Internet connection and does not depend on data derived from chemical or laboratory analysis. The aim of this study is to develop an energy-efficient TinyML model for classifying water potability that operates as an embedded system and relies solely on data available through electronic sensing. Compared with a similar model running in the Cloud, the proposed model requires 51.2% less memory, performs all inference tests approximately 99.95% faster, and consumes about 99.95% less energy. This increase in performance enables the classification model to run for years on very resource-constrained devices.
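The memory side of TinyML's advantage is easy to see at back-of-the-envelope level: quantizing parameters from float32 to int8 cuts storage fourfold. The sketch below uses a hypothetical tiny dense network (the layer sizes and nine-feature input are assumptions, not the paper's architecture, and this simple ratio does not reproduce the paper's measured 51.2% figure, which also reflects framework overhead):

```python
def mlp_param_count(layers):
    """Parameter count of a fully connected network: weights (in*out)
    plus biases (out) for each consecutive layer pair."""
    return sum(i * o + o for i, o in zip(layers, layers[1:]))

def model_bytes(layers, bytes_per_param):
    """Raw parameter storage for a given per-parameter width."""
    return mlp_param_count(layers) * bytes_per_param

# Hypothetical potability classifier: 9 sensor features -> 16 -> 8 -> 1 output
layers = [9, 16, 8, 1]
f32 = model_bytes(layers, 4)   # float32, as a Cloud model might store it
i8 = model_bytes(layers, 1)    # int8, as a quantized TinyML model stores it
saving = 1 - i8 / f32          # fraction of parameter memory saved
```

On a microcontroller with tens of kilobytes of RAM, keeping the whole model in the int8 budget is often the difference between fitting on-device and not fitting at all.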
{"title":"An energy efficient TinyML model for a water potability classification problem","authors":"Emanuel Adler Medeiros Pereira, Jeferson Fernando da Silva Santos, Erick de Andrade Barboza","doi":"10.1016/j.suscom.2024.101010","DOIUrl":"https://doi.org/10.1016/j.suscom.2024.101010","url":null,"abstract":"<div><p>Safe drinking water is an essential resource and a fundamental human right, but its access continues beyond billions of people, posing numerous health risks. A key obstacle in monitoring water quality is managing and analyzing extensive data. Machine learning models have become increasingly prevalent in water quality monitoring, aiding decision makers and safeguarding public health. An integrated system, which combines electronic sensors with a Machine Learning model, offers immediate feedback and can be implemented in any location. This type of system operates independently of an Internet connection and does not depend on data derived from chemical or laboratory analysis. The aim of this study is to develop an energy-efficient TinyML model to classify water potability that operates as an embedded system and relies solely on the data available through electronic sensing. When compared with a similar model functioning in the Cloud, the proposed model requires 51.2% less memory space, performs all inference tests approximately 99.95% faster, and consumes about 99.95% less energy. 
This increase in performance enables the classification model to run for years in devices that are very resource-constrained.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"43 ","pages":"Article 101010"},"PeriodicalIF":3.8,"publicationDate":"2024-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141486825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}