Pub Date : 2024-11-05, DOI: 10.1016/j.suscom.2024.101049
Jose M. Badia, German Leon, Mario Garcia-Valderas, Jose A. Belloch, Almudena Lindoso, Luis Entrena
This study focuses on the low-power Tegra X1 System-on-Chip (SoC) from the Jetson Nano Developer Kit, which is increasingly used in a wide range of environments and tasks. As these SoCs grow in prevalence, it becomes crucial to analyse their computational performance, energy consumption, and reliability, especially for safety-critical applications. A key factor examined in this paper is the SoC's neutron radiation tolerance, explored by subjecting a parallel version of matrix multiplication, offloaded to various hardware components via OpenMP, to neutron irradiation. Through this approach, the authors establish a correlation between the SoC's reliability and its computational and energy performance. The analysis enables the identification of an optimal workload distribution strategy, considering factors such as execution time, energy efficiency, and system reliability. Experimental results reveal that, while the GPU executes matrix multiplication tasks more rapidly and efficiently than the CPU, using both components together only marginally reduces execution time. Interestingly, GPU usage significantly increases the SoC's radiation cross-section, leading to a higher error rate for both Detected Unrecoverable Errors (DUEs) and Silent Data Corruptions (SDCs), with the CPU showing a higher average number of affected elements per SDC.
{"title":"Analysing the radiation reliability, performance and energy consumption of low-power SoC through heterogeneous parallelism","authors":"Jose M. Badia , German Leon , Mario Garcia-Valderas , Jose A. Belloch , Almudena Lindoso , Luis Entrena","doi":"10.1016/j.suscom.2024.101049","DOIUrl":"10.1016/j.suscom.2024.101049","url":null,"abstract":"<div><div>This study focuses on the low-power Tegra X1 System-on-Chip (SoC) from the Jetson Nano Developer Kit, which is increasingly used in various environments and tasks. As these SoCs grow in prevalence, it becomes crucial to analyse their computational performance, energy consumption, and reliability, especially for safety-critical applications. A key factor examined in this paper is the SoC’s neutron radiation tolerance. This is explored by subjecting a parallel version of matrix multiplication, which has been offloaded to various hardware components via OpenMP, to neutron irradiation. Through this approach, this researcher establishes a correlation between the SoC’s reliability and its computational and energy performance. The analysis enables the identification of an optimal workload distribution strategy, considering factors such as execution time, energy efficiency, and system reliability. Experimental results reveal that, while the GPU executes matrix multiplication tasks more rapidly and efficiently than the CPU, using both components only marginally reduces execution time. 
Interestingly, GPU usage significantly increases the SoC’s critical section, leading to an escalated error rate for both Detected Unrecoverable Errors (DUE) and Silent Data Corruptions (SDC), with the CPU showing a higher average number of affected elements per SDC.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101049"},"PeriodicalIF":3.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-28, DOI: 10.1016/j.suscom.2024.101048
A. Saravanaselvan, B. Paramasivan
Recently, security algorithms for energy-constrained sensor nodes have been developed to consume less energy for both computation and communication. For mission-critical wireless sensor network (WSN) applications, continuous and secure data collection from WSN nodes in the deployed field is an essential task. Therefore, this manuscript proposes an Energy Aware Sensor Node Design based on a One-Time Pad Cryptographic Algorithm with Huffman Source Coding (EA-SND-OTPCA-HSC). Before transmission, the distance between transmitter and receiver is estimated to reduce the sensor node's communication energy consumption. A periodic sleep/wake-up scheme combined with the Huffman source coding algorithm saves energy at the node level. Then, under the one-time pad cryptographic algorithm in each sensor node, Vernam cipher encryption is applied to the compressed payload. The proposed technique is implemented, and its efficacy is assessed through payload versus energy consumption for a single sensor node, communication energy consumption at different distances, energy consumption under various methods, throughput, delay, and jitter. The proposed method achieves 90.12 %, 89.78 %, and 91.78 % lower delay and 88.25 %, 95.34 %, and 94.12 % lower energy consumption compared with the existing EA-SND-Hyb-MG-CUF, EA-SND-PVEH, and EA-SND-PIA techniques, respectively.
{"title":"An one-time pad cryptographic algorithm with Huffman Source Coding based energy aware sensor node design","authors":"A. Saravanaselvan , B. Paramasivan","doi":"10.1016/j.suscom.2024.101048","DOIUrl":"10.1016/j.suscom.2024.101048","url":null,"abstract":"<div><div>Recently, the security-based algorithms for energy-constrained sensor nodes are being developed to consume less energy for computation as well as communication. For the mission critical wireless sensor network (WSN) applications, continuous and secure data collection from WSN nodes is an essential task on the deployed field. Therefore, in this manuscript, One-Time Pad Cryptographic Algorithm with Huffman Source Coding Based Energy Aware sensor node Design is proposed (EA-SND-OTPCA-HSC). Before transmission, the distance among transmitter and receiver is rated available in mission critical WSN for lessen communication energy consume of sensor node. For the mission critical WSN applications, continuous and secure data collection from WSN nodes is an essential task on the deployed field. The periodic sleep/wake up scheme with Huffman source coding algorithm is used to save energy at the node level. Then, one-time pad cryptographic algorithm in each sensor node, the vernam cipher encryption technique is applied to the compact payload. The proposed technique is executed and efficacy of proposed method is assessed using Payload Vs Energy consume for one sensor node, communication energy consume for one sensor node with different distances, energy consume for one sensor node under various methods, Throughput, delay and Jitter are analyzed. 
Then the proposed method provides 90.12 %, 89.78 % and 91.78 % lower delay and 88.25 %, 95.34 % and 94.12 % lesser energy consumption comparing to the existing EA-SND-Hyb-MG-CUF, EA-SND-PVEH and EA-SND-PIA techniques respectively.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101048"},"PeriodicalIF":3.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
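The pipeline the abstract describes, Huffman-compress the payload and then encrypt it with a one-time pad (Vernam cipher) before transmission, can be sketched as follows. This is a minimal illustration, not the EA-SND-OTPCA-HSC implementation; the sensor reading and pad handling are hypothetical.

```python
import heapq, itertools, secrets
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code table for the byte frequencies in `data`."""
    heap = [[freq, i, [sym, ""]] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    counter = itertools.count(len(heap))     # unique tie-breakers for the heap
    if len(heap) == 1:                       # degenerate single-symbol payload
        return {heap[0][2][0]: "0"}
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]          # extend codes on the 0-branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]          # extend codes on the 1-branch
        heapq.heappush(heap, [lo[0] + hi[0], next(counter)] + lo[2:] + hi[2:])
    return {sym: code for sym, code in heap[0][2:]}

def compress(data: bytes, code: dict) -> bytes:
    """Pack the Huffman bit string into bytes (zero-padded to a byte edge)."""
    bits = "".join(code[b] for b in data)
    bits += "0" * (-len(bits) % 8)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def vernam(payload: bytes, pad: bytes) -> bytes:
    """One-time pad (Vernam) encryption: XOR payload with an equal-length pad."""
    assert len(pad) >= len(payload)
    return bytes(p ^ k for p, k in zip(payload, pad))

reading = b"temp=23.5;temp=23.5;temp=23.6"   # repetitive sensor payload (made up)
code = huffman_code(reading)
packed = compress(reading, code)             # fewer bytes to transmit -> less TX energy
pad = secrets.token_bytes(len(packed))       # pad assumed pre-shared with the sink
cipher = vernam(packed, pad)
```

Because XOR is its own inverse, `vernam(cipher, pad)` recovers the compressed payload, and the shorter compressed length is exactly what reduces the radio's transmission energy.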
Pub Date : 2024-10-28, DOI: 10.1016/j.suscom.2024.101050
Vishakha Saurabh Shah, M.S. Ali, Saurabh A. Shah
Power quality (PQ) is one of the most important fields of energy study in the modern period. It is important to detect harmonics in the supply as well as any sharp voltage changes. Significant or rapid changes in the electrical load, i.e. load variations, can lead to several issues affecting power quality, including voltage fluctuations, harmonic distortion, frequency variations, and transient disturbances. Estimating load variation is a difficult task. The main aim of this work is to design and develop an Improved Lion Optimization Algorithm (ILA) to tune a CNN classifier that estimates the type of load variation. Initially, time-series features are extracted from the input data so that the type of load can be identified with enhanced accuracy. To estimate load variation, a Convolutional Neural Network (CNN) is used, and its weights are optimally tuned using the proposed ILA. The proposed method was simulated in MATLAB, and the ILA-CNN results are evaluated through error analysis based on indices such as MSRE, RMSE, MAPE, RMSRE, MARE, MAE, RMSPE, and MSE. The work examines load variations ranging from 40×10^6 Ω to 130×10^6 Ω while considering different learning percentages of 50 %, 60 %, and 70 %. The results demonstrate that at a learning percentage of 50, the MAE of the proposed ILA-CNN method is 7.06 %, 62.98 %, 41.13 %, and 54.63 % better than the CNN, DF+CNN, PSO+CNN, and LA+CNN methods, respectively.
{"title":"An optimized deep learning model for estimating load variation type in power quality disturbances","authors":"Vishakha Saurabh Shah, M.S. Ali, Saurabh A. Shah","doi":"10.1016/j.suscom.2024.101050","DOIUrl":"10.1016/j.suscom.2024.101050","url":null,"abstract":"<div><div>Power quality is one of the most important fields of energy study in the modern period (PQ). It is important to detect harmonics in the energy as well as any sharp voltage changes. When there are significant or rapid changes in the electrical load, i.e. load variations, it can lead to several issues affecting power quality, including voltage fluctuations, harmonic distortion, frequency variations, and transient disturbances. Estimating load variation is a difficult task. The main aim of this work is to design and develop an Improved Lion Optimization algorithm to tune the CNN classifier. It involves the estimation of the type of load variation. Initially, the time series features are taken from the input data in such a way to find the type of load with enhanced accuracy. To estimate load variation, a Convolutional Neural Network (CNN) is used, and its weights are optimally modified using the Improved Lion Algorithm, a proposed optimization algorithm (ILA). The proposed method was simulated in MATLAB and the result of the ILA-CNN method is generated based on error analysis based on the indices, such as MSRE, RMSE, MAPE, RMSRE, MARE, MAE, RMSPE, and MSE. The proposed work examines load variations ranging from 40×10<sup>6</sup><span><math><mi>Ω</mi></math></span>to 130×10<sup>6</sup><span><math><mi>Ω</mi></math></span>while considering different learning rates of 50 %, 60 %, and 70 %. 
The result demonstrates that at learning percentage 50, the MAE of the proposed ILA-CNN method is 7.06 %, 62.98 %, 41.13 % and 54.63 % better than the CNN, DF+CNN, PSO+CNN and LA+CNN methods.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101050"},"PeriodicalIF":3.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142661866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
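Several of the listed error indices follow directly from the actual and predicted load values. A minimal sketch of how such indices are computed (the load values are illustrative, not results from the paper):

```python
import math

def error_indices(actual, predicted):
    """Common regression-error indices used to evaluate load-variation estimates."""
    n = len(actual)
    errors = [a - p for a, p in zip(actual, predicted)]
    mse = sum(e * e for e in errors) / n                               # mean squared error
    mae = sum(abs(e) for e in errors) / n                              # mean absolute error
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errors, actual)) / n  # mean abs % error
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae, "MAPE": mape}

# Illustrative actual vs. predicted load values (not from the paper):
m = error_indices([100.0, 110.0, 120.0], [98.0, 113.0, 119.0])
```

The remaining indices in the list (MSRE, RMSRE, MARE, RMSPE) are relative/percentage variants built from the same error terms.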
Pub Date : 2024-10-28, DOI: 10.1016/j.suscom.2024.101047
Hossein Bitalebi, Farshad Safaei, Masoumeh Ebrahimi
The memory wall is one of the most critical bottlenecks in processors, rooted in long memory access delays. With the advent of memory-intensive applications such as image processing, the memory wall problem has become even more critical. Near data processing (NDP) has been introduced as a promising solution in which, instead of moving data from the main memory, instructions are offloaded to cores integrated at the main memory level. However, in NDP the instructions to be offloaded are selected statically at compilation time, prior to run-time. In addition, NDP ignores the benefit of offloading instructions to intermediate levels of the memory hierarchy. We propose Nearest Data Processing (NSDP), which introduces a hierarchical processing approach in GPUs. In NSDP, each memory hierarchy level is equipped with processing cores capable of executing instructions. By analysing instruction status at run-time, NSDP dynamically decides whether an instruction should be offloaded to the next level of the memory hierarchy or processed at the current level. Depending on the decision, either data is moved upward to the processing core or the instruction is moved downward to the data storage unit. With this approach, the data movement rate is reduced, on average, by 47 % over the baseline. Consequently, NSDP improves system performance, on average, by 37 % and reduces power consumption, on average, by 18 %.
{"title":"Nearest data processing in GPU","authors":"Hossein Bitalebi , Farshad Safaei , Masoumeh Ebrahimi","doi":"10.1016/j.suscom.2024.101047","DOIUrl":"10.1016/j.suscom.2024.101047","url":null,"abstract":"<div><div>Memory wall is known as one of the most critical bottlenecks in processors, rooted in the long memory access delay. With the advent of emerging memory-intensive applications such as image processing, the memory wall problem has become even more critical. Near data processing (NDP) has been introduced as an astonishing solution where instead of moving data from the main memory, instructions are offloaded to the cores integrated with the main memory level. However, in NDP, instructions that are to be offloaded, are statically selected at the compilation time prior to run-time. In addition, NDP ignores the benefit of offloading instructions into the intermediate memory hierarchy levels. We propose Nearest Data Processing (NSDP) which introduces a hierarchical processing approach in GPU. In NSDP, each memory hierarchy level is equipped with processing cores capable of executing instructions. By analyzing the instruction status at run-time, NSDP dynamically decides whether an instruction should be offloaded to the next level of memory hierarchy or be processed at the current level. Depending on the decision, either data is moved upward to the processing core or the instruction is moved downward to the data storage unit. With this approach, the data movement rate has been reduced, on average, by 47 % over the baseline. 
Consequently, NSDP has been able to improve the system performance, on average, by 37 % and reduce the power consumption, on average, by 18 %.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101047"},"PeriodicalIF":3.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142573188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
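The run-time decision NSDP makes can be approximated by a simple byte-movement comparison: ship the instruction down toward the data (and the result back up) only when that moves fewer bytes than pulling the operands up to the current level. The cost model and byte counts below are illustrative assumptions, not NSDP's actual policy.

```python
# Toy version of a run-time offload decision in the spirit of NSDP.
# Offloading moves the instruction down and the result up; executing in
# place moves the operands up. Choose whichever moves fewer bytes.

def should_offload(instr_bytes, operand_bytes, result_bytes):
    """True when executing near the data moves fewer bytes overall."""
    return instr_bytes + result_bytes < operand_bytes

# A reduction over a 4 KiB buffer: tiny instruction and result, large operands,
# so executing at the level holding the data wins (hypothetical sizes).
reduction = should_offload(instr_bytes=16, operand_bytes=4096, result_bytes=8)

# Adding two nearby scalars: cheaper to just fetch the operands to the core.
scalar_add = should_offload(instr_bytes=16, operand_bytes=16, result_bytes=8)
```

A real implementation would also weight each byte by the cost of the links it crosses, but the size comparison already captures why data-heavy, result-light instructions are the natural offload candidates.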
Pub Date : 2024-10-26, DOI: 10.1016/j.suscom.2024.101046
Dillip Khamari, Rabindra Kumar Sahu, Sidhartha Panda, Yogendra Arya
The exceptional growth in the penetration of renewable sources, together with the complex and variable operating conditions of load demand in power systems, may jeopardize operation without an appropriate automatic generation control (AGC) methodology. Hence, an intelligent, resilient fractional-order fuzzy PID (FOFPID)-controlled AGC system is presented in this study. The controller parameters are tuned using a modified moth swarm algorithm (mMSA), inspired by the movement of moths toward moonlight. First, the effectiveness of the controller is verified on a nonlinear 5-area thermal power system; the simulation outcomes show that the suggested controller outperforms recently published strategies. Next, the methodology is extended to a 5-area system with 10 generating units, namely thermal, hydro, wind, diesel, and gas turbine, with 2 units in each area, where the mMSA-based FOFPID is again more effective than other approaches. A sensitivity examination is executed to establish the robustness of the controller, and experiments on an OPAL-RT-based real-time simulator confirm the feasibility of the method. Finally, the mMSA-based FOFPID controller is observed to be superior to recently published approaches for the standard 2-area thermal and IEEE 10-generator 39-bus systems.
{"title":"A mMSA-FOFPID controller for AGC of multi-area power system with multi-type generations","authors":"Dillip Khamari , Rabindra Kumar Sahu , Sidhartha Panda , Yogendra Arya","doi":"10.1016/j.suscom.2024.101046","DOIUrl":"10.1016/j.suscom.2024.101046","url":null,"abstract":"<div><div>The exceptional growth in the penetration of renewable sources as well as complex and variable operating conditions of load demand in power system may jeopardize its operation without an appropriate automatic generation control (AGC) methodology. Hence, an intelligent resilient fractional order fuzzy PID (FOFPID) controlled AGC system is presented in this study. The parameters of controller are tuned utilizing a modified moth swarm algorithm (mMSA) inspired by the movement of moth towards moon light. At first, the effectiveness of the controller is verified on a nonlinear 5-area thermal power system. The simulation outcomes bring out that the suggested controller provides the best performance over the lately published strategies. In the subsequent step, the methodology is extended to a 5-area system having 10-units of power generations, namely thermal, hydro, wind, diesel, gas turbine with 2-units in each area. It is observed that mMSA based FOFPID is more effective related to other approaches. In order to establish the robustness of the controller, a sensitivity examination is executed. Then, experiments are conducted on OPAL-RT based real-time simulation to confirm the feasibility of the method. 
Finally, mMSA based FOFPID controller is observed superior than the recently published approaches for standard 2-area thermal and IEEE 10 generator 39 bus systems.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101046"},"PeriodicalIF":3.8,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
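The control loop underlying AGC can be illustrated with a plain discrete PID regulating a frequency deviation back to zero after a load disturbance. This is a deliberately simplified stand-in: the paper's controller is a fractional-order fuzzy PID tuned by mMSA, while the plant, gains, and disturbance below are toy assumptions chosen only to show the loop structure.

```python
# Minimal discrete PID loop driving a frequency deviation toward zero.
# The first-order plant, step disturbance, and hand-picked gains are
# illustrative assumptions, not the paper's FOFPID or power-system model.

def simulate(kp, ki, kd, steps=400, dt=0.01):
    df, integ, prev_err, tau = 0.0, 0.0, 0.0, 0.5   # deviation, integral, plant time constant
    disturbance = 1.0                               # sudden load change at t = 0
    for _ in range(steps):
        err = 0.0 - df                              # setpoint: zero frequency deviation
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv      # PID control effort
        prev_err = err
        df += dt * (-df + u + disturbance) / tau    # Euler step of the toy plant
    return df

final_dev = simulate(kp=4.0, ki=8.0, kd=0.0)        # PI gains picked for stability
uncontrolled = simulate(kp=0.0, ki=0.0, kd=0.0)     # no AGC: deviation persists
```

The integral term is what removes the steady-state offset the disturbance would otherwise leave, which is the core job of AGC; the fractional-order and fuzzy extensions in the paper refine exactly this loop.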
Pub Date : 2024-10-24, DOI: 10.1016/j.suscom.2024.101045
Mohamed Abdel-Basset, Reda Mohamed, Doaa El-Shahat, Karam M. Sallam, Ibrahim M. Hezam, Nabil M. AbdelAziz
The mobile edge computing system supported by multiple unmanned aerial vehicles (UAVs) has gained significant interest over the last few decades due to its flexibility and ability to enhance communication coverage. In this system, the UAVs function as edge servers offering computing services to Internet of Things devices (IoTDs); if they are located far from those devices, a significant amount of energy is consumed while data is transmitted. Therefore, optimizing the UAVs' trajectories is indispensable for minimizing the overall energy consumption of the system. This problem is difficult to solve because it involves multiple considerations, including the number and placement of stop points (SPs), their order, and the association between SPs and UAVs. A few studies in the literature address all of these aspects; nevertheless, most suffer from slow convergence, stagnation in local optima, and expensive computational costs. Therefore, this study presents a new trajectory optimization algorithm, ITPA-GBOKM, based on a newly proposed transfer-function-based encoding mechanism, a gradient-based optimizer (GBO), and the K-medoids clustering algorithm. K-medoids clustering achieves a better association between UAVs and SPs because it is less sensitive to outliers than K-means; the transfer-function-based encoding mechanism efficiently defines the problem's solutions and manages the number of SPs; and GBO searches for the SPs that minimize overall energy consumption, including that consumed by both UAVs and IoTDs. The proposed ITPA-GBOKM is evaluated on 13 instances with numbers of IoTDs ranging from 60 to 700 to show its effectiveness at small, medium, and large scales. Furthermore, it is compared to several rival optimizers using a variety of performance metrics, including average fitness, multiple comparison test, Wilcoxon rank-sum test, standard deviation, Friedman mean rank, and convergence speed. The experimental results indicate that the algorithm produces significantly different and superior results compared to the rival algorithms.
{"title":"Energy-efficient trajectory optimization algorithm based on K-medoids clustering and gradient-based optimizer for multi-UAV-assisted mobile edge computing systems","authors":"Mohamed Abdel-Basset , Reda Mohamed , Doaa El-Shahat , Karam M. Sallam , Ibrahim M. Hezam , Nabil M. AbdelAziz","doi":"10.1016/j.suscom.2024.101045","DOIUrl":"10.1016/j.suscom.2024.101045","url":null,"abstract":"<div><div>The mobile edge computing system supported by multiple unmanned aerial vehicles (UAVs) has gained significant interest over the last few decades due to its flexibility and ability to enhance communication coverage. In this system, the UAVs function as edge servers to offer computing services to Internet of Things devices (IoTDs), and if they are located distant from those devices, a significant amount of energy is consumed while data is transmitted. Therefore, optimizing UAVs’ trajectories is an indispensable process to minimize overall energy consumption in this system. This problem is difficult to solve because it requires multiple considerations, including the number and placement of stop points (SPs), their order, and the association between SPs and UAVs. A few studies in the literature have been presented to address all of these aspects; nevertheless, the majority of them suffer from slow convergence speed, stagnation in local optima, and expensive computational costs. Therefore, this study presents a new trajectory optimization algorithm, namely ITPA-GBOKM, based on a newly proposed transfer-based encoding mechanism, gradient-based optimizer, and K-Medoids Clustering algorithm to tackle this problem more accurately. 
The K-medoid clustering algorithm is used to achieve better association between UAVs and SPs since it is less sensitive to outliers than the K-means clustering algorithm; the transfer function-based encoding mechanism is used to efficiently define this problem’s solutions and manage the number of SPs; and GBO is utilized to search for the best SPs that could minimize overall energy consumption, including that consumed by UAVs and IoTDs. The proposed ITPA-GBOKM is evaluated using 13 instances with several IoTDs ranging from 60 to 700 to show its effectiveness in dealing with the trajectory optimization problem at small, medium, and large scales. Furthermore, it is compared to several rival optimizers using a variety of performance metrics, including average fitness, multiple comparison test, Wilcoxon rank sum test, standard deviation, Friedman mean rank, and convergence speed, to show its superiority. The experimental results indicates that this algorithm is capable of producing significantly different and superior results compared to the rival algorithms.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101045"},"PeriodicalIF":3.8,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142531010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
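The outlier robustness that motivates K-medoids over K-means comes from restricting cluster centres to actual data points. A tiny 1-D PAM-style sketch, using made-up values rather than UAV stop points from the paper, shows how a far outlier fails to drag a medoid away from its group:

```python
def k_medoids(points, k, iters=20):
    """Minimal K-medoids sketch: medoids are real points, which makes the
    method less sensitive to outliers than K-means centroids."""
    medoids = list(points[:k])                            # naive deterministic init
    for _ in range(iters):
        clusters = {i: [] for i in range(k)}
        for p in points:                                  # assign to nearest medoid
            nearest = min(range(k), key=lambda m: abs(p - medoids[m]))
            clusters[nearest].append(p)
        # each new medoid is the member minimising total in-cluster distance
        new = [min(c, key=lambda m: sum(abs(m - q) for q in c)) if c else medoids[i]
               for i, c in clusters.items()]
        if new == medoids:                                # converged
            break
        medoids = new
    return sorted(medoids)

# Two tight groups plus one extreme outlier (100.0) that a K-means centroid
# would be pulled toward, but a medoid ignores.
data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7, 100.0]
meds = k_medoids(data, k=2)
```

Production PAM also performs swap-based refinement; the assign/update loop above is the minimal version of the idea.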
Pub Date : 2024-10-20, DOI: 10.1016/j.suscom.2024.101044
B. Swathi, M. Amanullah, S.A. Kalaiselvan
Fault tolerance is the network's capacity to continue operating normally in the event of sensor failure. Sensor nodes in wireless sensor networks (WSNs) may fail for various reasons, such as energy depletion or environmental damage. Battery drain is the leading cause of node failure in WSNs, making energy saving crucial to extending sensor lifespan. Fault-tolerant protocols use fault recovery methods to ensure network reliability and resilience against issues such as communication module breakdown, battery drain, or changes in network architecture. The proposed FT-RR protocol is a WSN routing protocol that is both reliable and fault-tolerant; it attempts to prevent errors by anticipating them. FT-RR uses Bernoulli's rule to identify trustworthy nodes and then uses those pathways to route data to the base station as efficiently as possible; cluster heads (CHs) with greater residual energy construct these pathways. Simulation findings show that the approach outperforms the other protocols in terms of packet loss rate, end-to-end latency, and network lifespan.
{"title":"Energy-efficient and fault-tolerant routing mechanism for WSN using optimizer based deep learning model","authors":"B. Swathi , Dr. M. Amanullah , S.A. Kalaiselvan","doi":"10.1016/j.suscom.2024.101044","DOIUrl":"10.1016/j.suscom.2024.101044","url":null,"abstract":"<div><div>Fault tolerance is the network's capacity to continue operating normally in the event of sensor failure. Sensor nodes in wireless sensor networks (WSNs) may fail due to various reasons, such as energy depletion or environmental damage. Sensor battery drain is the leading cause of failure in WSNs, making energy-saving crucial to extending sensor lifespan. Fault-tolerant protocols use fault recovery methods to ensure network reliability and resilience. Many issues can affect a network, such as communication module breakdown, battery drain, or changes in network architecture. Our proposed FT-RR protocol is a WSN routing protocol that is both reliable and fault-tolerant; it attempts to prevent errors by anticipating them. FT-RR uses Bernoulli's rule to find trustworthy nodes and then uses those pathways to route data to the base station as efficiently as possible. When CHs have greater energy, they construct these pathways. 
Based on the simulation findings, our approach outperforms the other protocols concerning the rate of loss of packet, end-to-end latency, and network lifespan.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101044"},"PeriodicalIF":3.8,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
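The abstract does not spell out FT-RR's Bernoulli rule, but the general idea of Bernoulli-trial trust can be sketched: treat each forwarding attempt as an independent trial, estimate each node's delivery probability from its history, and score candidate routes by the product of per-hop trust. The smoothing scheme and route histories below are assumptions for illustration only, not the paper's formulation.

```python
def trust(successes, failures):
    """Estimate a node's delivery probability from Bernoulli trials, with
    Laplace smoothing so nodes without history are not trusted blindly."""
    return (successes + 1) / (successes + failures + 2)

def path_reliability(hops):
    """Independent-link approximation: multiply per-hop trust values."""
    r = 1.0
    for s, f in hops:
        r *= trust(s, f)
    return r

# Hypothetical per-hop histories as (delivered, dropped) counts.
route_a = [(9, 1), (8, 2)]
route_b = [(5, 5), (9, 1)]
best = max([route_a, route_b], key=path_reliability)
```

Routing over the most reliable path is what reduces retransmissions, and hence both packet loss and wasted transmission energy.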
Pub Date : 2024-10-18, DOI: 10.1016/j.suscom.2024.101043
R.M. Bhavadharini, Suseela Sellamuthu, G. Sudhakaran, Ahmed A. Elngar
Due to the resource-constrained nature of WSNs, ensuring secure communication is a challenging problem, and enhancing network lifetime is one of the major issues faced by existing studies. To secure communication in WSNs and achieve improved network lifetime, a novel trust-enabled routing protocol is proposed in this study. Initially, clusters are constructed using Direct, Indirect, and Total Trust evaluations, which help identify faulty nodes. Next, an Improved Fuzzy-based Balanced Cost Cluster Head Selection (IFBECS) method is used to choose the cluster head (CH). Finally, to determine the best path from source to destination, a hybrid bionic energy-efficient routing model, the Energy Efficient Rider Remora Routing (EERRR) protocol, is introduced. To improve network lifetime and throughput, the protocol considers parameters such as the remaining energy of the CH and the sensor node spacing. The proposed mechanism is implemented in the NS-2 simulator. The simulation results show that the proposed routing protocol attains an improved PDR of 97.92 % at 50 ms, reduced energy consumption of 3.336 at 100 ms, higher throughput of 86262.7 at 250 ms, and an enhanced network lifetime of 1028.08 rounds with 200 nodes. By attaining better results than other existing protocols, the proposed routing protocol is shown to be highly suitable for secure, energy-efficient WSN communication.
{"title":"Fuzzy based Energy Efficient Rider Remora Routing protocol for secured communication in WSN network","authors":"R.M. Bhavadharini , Suseela Sellamuthu , G. Sudhakaran , Ahmed A. Elngar","doi":"10.1016/j.suscom.2024.101043","DOIUrl":"10.1016/j.suscom.2024.101043","url":null,"abstract":"<div><div>Due to the resource constraint nature of WSN, ensuring secured communication in WSN is a challenging problem. Moreover, enhancing the network lifetime is one of the major issues faced by the existing studies. So, in order to secure the communication between WSNs and achieve improved network lifetime, a novel trust enabled routing protocol is proposed in this study. Initially, the clusters are constructed using Direct, Indirect, and Total Trust evaluations, which helps to identify the faulty nodes. After, an Improved Fuzzy-based Balanced Cost Cluster Head Selection (IFBECS) method is used to choose the cluster head (CH). Finally, to determine the best path from source to destination, a hybrid bionic energy-efficient routing model known as an Energy Efficient Rider Remora Routing (EERRR) protocol is introduced. To improve the network lifetime and throughput, the parameters like remaining energy of the CH, sensor node space, CH, etc., are considered by the utilized protocol. The proposed mechanism is implemented in NS-2 programming tool. The simulation results show that the proposed routing protocol has attained improved PDR of 97.92 % at the time period of 50 ms, reduced energy consumption of 3.336 at the time period of 100 ms, higher throughput of 86262.7 at the time period of 250 ms, and enhanced network lifetime of 1028.08 rounds in 200 nodes. 
Therefore, by attaining better results as compared with other existing protocols, it is clearly revealed that the proposed routing protocol is highly suitable for secured energy efficient WSN communication.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101043"},"PeriodicalIF":3.8,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-11DOI: 10.1016/j.suscom.2024.101042
Dinesh Kumar Jayaraman Rajanediran , C. Ganesh Babu , K. Priyadharsini
Acceleration techniques play a crucial role in enhancing the performance of modern high-speed computation, especially in Deep Learning (DL) applications where speed is of utmost importance. One essential component in this context is the Systolic Array (SA), which handles computational tasks and data processing in a rhythmic, pipelined manner. Google's Tensor Processing Unit (TPU) leverages SAs for neural networks. The SA's core functionality and performance lie in the Computation Element (CE), which facilitates parallel data flow. In this article, we introduce a novel approach, the Proposed Systolic Array (PSA), which is implemented in the CE and further enhanced with a modified Hybrid Kogge-Stone adder (MHA). The design expedites computation by rounding and extracting the data model in the SA, yielding PSA-MHA. The PSA, using a data-flow model with the MHA, significantly accelerates data shifts and control passes across execution cycles. We validated our approach through simulations on the Cadence Virtuoso platform using 65 nm process technology, comparing it against the General Matrix Multiplication (GMMN) benchmark. The results showed marked improvements in the CE: a 30.29 % reduction in delay, a 23.07 % reduction in area, and an 11.87 % reduction in power consumption. The PSA went further, achieving a 46.38 % reduction in delay, a 7.58 % reduction in area, and a 48.23 % decrease in Area-Delay Product (ADP). To further substantiate our findings, we applied the PSA-based approach to pre-trained hybrid Convolutional-Recurrent (CNN-RNN) neural models. The PSA-based hybrid model incorporates 189 million Multiply-Accumulate (MAC) units, resulting in a weighted mean architecture value of 784.80 for the RNN component. We also explored variations in bit width, which yielded delay reductions ranging from 20.17 % to 30.29 %, area variations between 13.08 % and 32.16 %, and power consumption changes spanning from 11.88 % to 20.42 %.
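The rhythmic, CE-level flow of a systolic array can be sketched as a cycle-level simulation of a generic output-stationary design, where each CE performs one multiply-accumulate per cycle on operands arriving under a skewed (wavefront) schedule. This is a textbook illustration of systolic matrix multiplication, not the paper's PSA-MHA design.

```python
# Cycle-level sketch of an output-stationary systolic array: each Computation
# Element (CE) at grid position (i, j) accumulates one output element, doing
# one MAC per cycle. Operand A[i][s] arrives from the left and B[s][j] from
# above at cycle t = s + i + j because of the diagonal input skew.

def systolic_matmul(A, B):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]          # one accumulator per CE
    for t in range(n + m + k - 2):           # total cycles, including drain
        for i in range(n):
            for j in range(m):
                s = t - i - j                # k-index reaching CE(i, j) now
                if 0 <= s < k:
                    C[i][j] += A[i][s] * B[s][j]  # the CE's MAC operation
    return C
```

For a 2x3 by 3x2 product the array completes in n + m + k - 2 = 5 cycles, after which every CE holds its full dot product; the result matches an ordinary matrix multiply.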
{"title":"A certain examination on heterogeneous systolic array (HSA) design for deep learning accelerations with low power computations","authors":"Dinesh Kumar Jayaraman Rajanediran , C. Ganesh Babu , K. Priyadharsini","doi":"10.1016/j.suscom.2024.101042","DOIUrl":"10.1016/j.suscom.2024.101042","url":null,"abstract":"<div><div>Acceleration techniques play a crucial role in enhancing the performance of modern high-speed computations, especially in Deep Learning (DL) applications where the speed is of utmost importance. One essential component in this context is the Systolic Array (SA), which effectively handles computational tasks and data processing in a rhythmic manner. Google's Tensor Processing Unit (TPU) leverages the power of SA for neural networks. The core SA's functionality and performance lies in the Computation Element (CE), which facilitates parallel data flow. In our article, we introduce a novel approach called Proposed Systolic Array (PSA), which is implemented on the CE and further enhanced with a modified Hybrid Kogge Stone adder (MHA). This design incorporates principles to expedite computations by rounding and extracting data model in SA as PSA-MHA. The PSA, utilizing a data flow model with MHA, significantly accelerates data shifts and control passes in execution cycles. We validated our approach through simulations on the Cadence Virtuoso platform using 65 nm process technology, comparing it to the General Matrix Multiplication (GMMN) benchmark. The results showed remarkable improvements in the CE, with a 30.29 % reduction in delay, a 23.07 % reduction in area, and an 11.87 % reduction in power consumption. The PSA outperformed these improvements, achieving a 46.38 % reduction in delay, a 7.58 % reduction in area, and an impressive 48.23 % decrease in Area Delay Product (ADP). To further substantiate our findings, we applied the PSA-based approach to pre-trained hybrid Convolutional and Recurrent (CNN-RNN) neural models. 
The PSA-based hybrid model incorporates 189 million Multiply-Accumulate (MAC) units, resulting in a weighted mean architecture value of 784.80 for the RNN component. We also explored variations in bit width, which led to delay reductions ranging from 20.17 % to 30.29 %, area variations between 13.08 % and 32.16 %, and power consumption changes spanning from 11.88 % to 20.42 %.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101042"},"PeriodicalIF":3.8,"publicationDate":"2024-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142438052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-09DOI: 10.1016/j.suscom.2024.101041
Rahul Gupta , Aseem Chandel
The rapid expansion of solar power generation has created new challenges from solar intermittency, requiring precise forecasts of Global Horizontal Irradiance (GHI). Accurate GHI predictions are crucial for integrating sustainable energy sources into traditional electrical grid management. This article proposes an innovative solution, the novel Enhanced Stack Ensemble with a Bi-directional Gated Recurrent Unit (ESE-Bi-GRU), in which machine learning (ML) boosting regressors such as AdaBoost, CatBoost, Extreme Gradient Boost, Gradient Boost, and Light Gradient Boost Machine act as base learners, while deep learning (DL) algorithms, Long Short-Term Memory (LSTM) and a Gated Recurrent Unit (GRU) operating in both directions, serve as the meta-learner. The predictive performance of the proposed ESE-Bi-GRU model is evaluated against the individual models, showing significant reductions in mean absolute error (MAE) of 86.03 % and root mean squared error (RMSE) of 66.43 %. The model's ability to minimize prediction errors such as MAE and RMSE holds promise for more effective planning and utilization of intermittent solar resources. By improving GHI forecast accuracy, the ESE-Bi-GRU model contributes to optimizing the integration of sustainable energy sources within the broader energy grid, fostering a more sustainable and environmentally conscious approach to energy management.
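The two-stage stacking idea can be sketched in miniature: several base regressors each produce a GHI prediction, and a meta-learner is fit on the stacked base predictions. In this sketch the boosting regressors are stood in for by polynomial least-squares fits and the Bi-GRU meta-learner by an ordinary linear meta-regression; both are placeholders for the paper's actual models, and the toy signal is synthetic.

```python
import numpy as np

# Toy GHI-like signal: one diurnal-looking cycle plus noise (synthetic data).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

def poly_fit_predict(x, y, degree):
    """Placeholder base learner: polynomial least-squares fit and predict."""
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x)

# Stage 1: base learners (stand-ins for AdaBoost, CatBoost, XGBoost, ...),
# stacked column-wise into a meta-feature matrix.
base_preds = np.column_stack([poly_fit_predict(x, y, d) for d in (3, 5, 7)])

# Stage 2: meta-learner fit on the base predictions (stand-in for the Bi-GRU).
w, *_ = np.linalg.lstsq(base_preds, y, rcond=None)
ensemble_pred = base_preds @ w

mae = np.mean(np.abs(ensemble_pred - y))
```

Because the meta-learner optimizes over linear combinations of the base predictions, its squared error cannot exceed that of any single base learner on the training data, which is the basic motivation for stacking.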
{"title":"A bidirectional gated recurrent unit based novel stacking ensemble regressor for foretelling the global horizontal irradiance","authors":"Rahul Gupta , Aseem Chandel","doi":"10.1016/j.suscom.2024.101041","DOIUrl":"10.1016/j.suscom.2024.101041","url":null,"abstract":"<div><div>The rapid expansion of solar power generation has led to new challenges in solar intermittency, requiring precise forecasts of Global Horizontal Irradiance (GHI). Accurate GHI predictions are crucial for integrating sustainable energy sources into traditional electrical grid management. The article proposes an innovative solution, the novel Enhanced Stack Ensemble with a Bi-directional Gated Recurrent Unit (ESE-Bi-GRU), which uses machine learning (ML) boosting regressors such as Ada Boost, Cat Boost, Extreme Gradient Boost, and Gradient Boost, and Light Gradient Boost Machine acts as a base learner and the deep learning (DL) algorithms such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) for both directions are taken as a meta-learner. The predictive performance of the proposed ESE-Bi-GRU model is evaluated against individual models, showing significant reductions in mean absolute error (MAE) by 86.03 % and root mean squared error (RMSE) by 66.43 %. The model's ability to minimize prediction errors, such as MAE and RMSE holds promise for more effective planning and utilization of sporadic solar resources. 
By improving GHI forecast accuracy, the ESE-Bi-GRU model contributes to optimizing the integration of sustainable energy sources within the broader energy grid, fostering a more sustainable and environmentally conscious approach to energy management.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101041"},"PeriodicalIF":3.8,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142432143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}