ETNAS: An energy consumption task-driven neural architecture search
Pub Date: 2023-11-10. DOI: 10.1016/j.suscom.2023.100926
Dong Dong, Hongxu Jiang, Xuekai Wei, Yanfei Song, Xu Zhuang, Jason Wang
Neural Architecture Search (NAS) is crucial in the field of sustainable computing, as it facilitates the development of highly efficient and effective neural networks. However, NAS by itself cannot automate the deployment of neural networks to match specific hardware resources and task requirements. This paper introduces ETNAS, a hardware-aware multi-objective neural architecture search algorithm based on the differentiable architecture search method (DARTS). The algorithm searches for a lower-power neural network architecture with guaranteed inference accuracy by modifying the loss function of the differentiable architecture search. We also modify the dense network in DARTS to simultaneously search for networks with a smaller memory footprint, enabling them to run on memory-constrained edge devices. We collected power-consumption and time-consumption data for numerous common operators on FPGA and Domain-Specific Architectures (DSA). The experimental results demonstrate that ETNAS achieves comparable accuracy and time efficiency while consuming less power than state-of-the-art algorithms, validating its effectiveness in practical applications and contributing to the reduction of carbon emissions in intelligent cyber-physical systems (ICPS) edge computing inference.
{"title":"ETNAS: An energy consumption task-driven neural architecture search","authors":"Dong Dong , Hongxu Jiang , Xuekai Wei , Yanfei Song , Xu Zhuang , Jason Wang","doi":"10.1016/j.suscom.2023.100926","DOIUrl":"10.1016/j.suscom.2023.100926","url":null,"abstract":"<div><p><span>Neural Architecture Search (NAS) is crucial in the field of sustainable computing as it facilitates the development of highly efficient and effective neural networks. However, it cannot automate the deployment of neural networks to accommodate specific hardware resources and task requirements. This paper introduces ETNAS, which is a hardware-aware multi-objective optimal </span>neural network architecture<span><span><span> search algorithm based on the differentiable neural network architecture search method (DARTS). The algorithm searches for a lower-power neural network architecture with guaranteed inference accuracy by modifying the loss function of the differentiable neural network architecture search. We modify the dense network in DARTS to simultaneously search for networks with a lower memory footprint, enabling them to run on memory-constrained edge-end devices. We collected data on the </span>power consumption and time consumption of numerous common operators on </span>FPGA<span> and Domain-Specific Architectures (DSA). The experimental results demonstrate that ETNAS achieves comparable accuracy performance and time efficiency while consuming less power compared to state-of-the-art algorithms, thereby validating its effectiveness in practical applications and contributing to the reduction of carbon emissions in intelligent cyber–physical systems (ICPS) edge computing inference.</span></span></p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100926"},"PeriodicalIF":4.5,"publicationDate":"2023-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135565444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A framework for real-time vehicle counting and velocity estimation using deep learning
Pub Date: 2023-11-07. DOI: 10.1016/j.suscom.2023.100927
Wei-Chun Chen, Ming-Jay Deng, Ping-Yu Liu, Chun-Chi Lai, Yu-Hao Lin
To better control traffic and promote environmental sustainability, this study proposed a framework to monitor vehicle number and velocity in real time. First, the You Only Look Once-v4 (Yolo-v4) algorithm, based on deep learning, can greatly improve the accuracy of object detection in an image, and trackers such as Sort and Deepsort resolve the identity-switch problem to track multiple objects efficiently. To that end, this study combined Yolo-v4 with Sort and Deepsort to develop two trajectory models, referred to as YS and YDS, respectively. In addition, different regions of interest (ROI) with different pixel distances (PDs), named ROI-10 and ROI-14, were derived from road markings to calibrate the PD. Finally, a high-resolution benchmark video and two real-time low-resolution highway videos were employed to validate the proposed framework. Results show that the YDS with ROI-10 achieved 90% vehicle-counting accuracy against the number of actual vehicles, outperforming the YS with ROI-10, whereas the YDS with ROI-14 produced comparatively good estimates of vehicle velocity. On the real-time low-resolution videos, the YDS with ROI-10 achieved 89.5% and 83.7% vehicle-counting accuracy at the Nantun and Daya highway sites, respectively, and reasonable velocity estimates were obtained. In the future, more bus and light-truck images could be collected to train Yolo-v4 more effectively and improve the detection of buses and light trucks. A more precise mechanism for vehicle velocity estimation and vehicle detection under different environmental conditions should be further investigated.
{"title":"A framework for real-time vehicle counting and velocity estimation using deep learning","authors":"Wei-Chun Chen , Ming-Jay Deng , Ping-Yu Liu , Chun-Chi Lai , Yu-Hao Lin","doi":"10.1016/j.suscom.2023.100927","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100927","url":null,"abstract":"<div><p>To better control traffic and promote environmental sustainability, this study proposed a framework to monitor vehicle number and velocity at real time. First, You Only Look Once-v4 (Yolo-v4) algorithm based on deep learning can greatly improve the accuracy of object detection in an image, and trackers, including Sort and Deepsort, resolved the identity switch problem to track efficiently the multiple objects. To that end, this study combined Yolo-v4 with Sort and Deepsort to develop two trajectory models, which are known as YS and YDS, respectively. In addition, different regions of interest (ROI) with different pixel distances (PDs), named ROI-10 and ROI-14, were converted by road marking to calibrate the PD. Finally, a high-resolution benchmark video and two real-time low-resolution videos of highway both were employed to validate this proposed framework. Results show the YDS with ROI-10 achieved 90% accuracy of vehicle counting, when compared to the number of actual vehicles, and this outperformed the YS with ROI-10. However, the YDS with ROI-14 generated relatively good estimates of vehicle velocity. As shown in the real-time low-resolution videos, the YDS with ROI-10 achieved 89.5% and 83.7% accuracy of vehicle counting in Nantun and Daya sites of highway, respectively, and reasonable estimates of vehicle velocity were obtained. In the future, more bus and light truck images could be collected to effectively train the Yolo-v4 and improve the detection of bus and light truck. A better mechanism for precise vehicle velocity estimation and the vehicle detection in different environment conditions should be further investigated.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100927"},"PeriodicalIF":4.5,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92024808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eagle arithmetic optimization algorithm for renewable energy-based load frequency stabilization of power systems
Pub Date: 2023-10-23. DOI: 10.1016/j.suscom.2023.100925
Ligang Tang, Tong Kong, Nisreen Innab
Power systems' efficient management and planning are crucial in renewable energy-based systems. As global electricity demand continues to rise, there is a growing need for alternative energy sources such as solar, wind, and hydropower. Consequently, numerous studies have focused on maintaining load balancing within renewable energy systems and improving the forecasting of renewable energy resources. This paper presents the Eagle Arithmetic Optimization Algorithm (EAOA) as a novel approach to address these challenges. By utilizing a fuzzy-based dragonfly optimization algorithm (fuzzy-DFOA), the proposed method enhances the accuracy of load-balancing analysis for renewable energy resources. Through its innovative techniques, the EAOA demonstrates its potential to significantly improve the efficiency and effectiveness of managing renewable energy systems, paving the way for a more sustainable and reliable power grid. Accuracy rates are reported for both wind and solar datasets: on the wind dataset, the proposed method achieves 92.63%, compared with 75.89% for SVR, 87.54% for CNN, and 83.16% for QODA; on the solar dataset, the fuzzy-based DFOA achieves 92.59%, compared with 69.16% for SVR, 86.25% for CNN, and 82.37% for QODA.
{"title":"Eagle arithmetic optimization algorithm for renewable energy-based load frequency stabilization of power systems","authors":"Ligang Tang , Tong Kong , Nisreen Innab","doi":"10.1016/j.suscom.2023.100925","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100925","url":null,"abstract":"<div><p><span>Power systems' efficient management and planning are crucial in renewable energy-based systems. As the global electricity demand continues to rise, there is a growing need for alternative energy sources<span> such as solar, wind, and hydropower. Consequently, numerous research studies have focused on maintaining load balancing within the renewable energy system<span> and improving the forecasting of renewable energy resources. This paper presents the Eagle Arithmetic </span></span></span>Optimization Algorithm<span> (EAOA) as a novel approach to address these challenges. By utilizing a fuzzy-based dragonfly optimization algorithm (fuzzy-DFOA), the proposed method enhances the accuracy of load-balancing analysis in renewable energy resources. Through its innovative techniques, the EAOA demonstrates its potential to significantly improve the efficiency and effectiveness of managing renewable energy systems, paving the way for a more sustainable and reliable power grid. The accuracy rate of both wind and solar datasets is given. For the wind dataset, our proposed work got 92.63%, SVR got 75.89%, CNN got 87.54%, and QODA got 83.16%. For the solar dataset presented work of fuzzy-based DFOA got 92.59%, SVR got 69.16%, CNN got 86.25%, and QODA got 82.37%.</span></p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100925"},"PeriodicalIF":4.5,"publicationDate":"2023-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92024810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strip running deviation monitoring and feedback real-time in smart factories based on improved YOLOv5
Pub Date: 2023-10-13. DOI: 10.1016/j.suscom.2023.100923
Jun Luo, Gang Wang, Mingliang Zhou, Huayan Pu, Jun Luo
Strip running deviation in steel production can cause significant economic losses by forcing a shutdown of the whole production line. However, because of the strip's high running speed (100–140 m/min), it is difficult to judge accurately online whether the strip is deviating and to control the deviation during operation. In this paper, a fast and accurate model for detecting strip running deviation is proposed; the model enables real-time control of strip deviation based on its detection results. An attention module is used in the model to improve detection accuracy, and the rolling equipment's pressing force can be controlled in real time to correct the deviation. Compared with the original model, the proposed model achieves a 3 % increase in accuracy, and the detection speed reaches 29 FPS, meeting real-time requirements. This work offers ideas for applying computer vision in the construction of intelligent factories.
{"title":"Strip running deviation monitoring and feedback real-time in smart factories based on improved YOLOv5","authors":"Jun Luo , Gang Wang , Mingliang Zhou , Huayan Pu , Jun Luo","doi":"10.1016/j.suscom.2023.100923","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100923","url":null,"abstract":"<div><p>The strip running deviation in steel production can cause significant economic losses by forcing a shutdown of the whole steel production line. However, due to the fast running speed (100–140 m/min) of the strip, it a difficult problem to accurately judge online whether the strip running deviation or not and control its deviation during operation. In this paper, a fast and accurate model for detecting strip running deviation is proposed, this model allows for real-time control of strip operation deviation according to the detection model’s results. In our model, the attention module is used to improve the detection accuracy. The rolling equipment’s pressing force can be real-time controlled to correct the strip running deviation. Compared with the original model, the proposed model in this paper achieves an increase in accuracy of 3 %, and the detection speed can reach 29 FPS, meeting the real-time requirements. This work can provide ideas for applying computer vision in construction of intelligent factories.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100923"},"PeriodicalIF":4.5,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92024809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soil moisture simulation of rice using optimized Support Vector Machine for sustainable agricultural applications
Pub Date: 2023-10-13. DOI: 10.1016/j.suscom.2023.100924
Parijata Majumdar, Sanjoy Mitra, Diptendu Bhattacharya
The growth and development of rice crops primarily depend on an appropriate soil water balance, for which soil moisture is the key determinant. Soil moisture is a crucial parameter in the hydrological cycle and plays a vital role in optimal water management for sustainable agricultural growth, as it significantly affects hydrological, ecological, and climatic processes. Accurate estimation of soil moisture is therefore important; otherwise, crop yields will drop drastically, intensifying the global food crisis. A novel soil moisture prediction model (SVM-COLGWO) is proposed that incorporates Chebyshev chaotic maps and opposition-based learning into the Grey Wolf Optimizer (GWO) to optimize the Support Vector Machine (SVM) model. The suggested model increases the simulated model's accuracy while speeding up global convergence. To evaluate the proposed model, its prediction performance is compared with other hybrid and standalone models, and its feasibility is validated through superior simulation results (MAE = 0.167, MSE = 0.179, RMSE = 0.423, MAPE = 0.162, and R² = 0.949), including Shannon's entropy. Based on accurate soil moisture simulation through the proposed model, irrigation can thus be effectively scheduled for sustainable rice growth.
{"title":"Soil moisture simulation of rice using optimized Support Vector Machine for sustainable agricultural applications","authors":"Parijata Majumdar , Sanjoy Mitra , Diptendu Bhattacharya","doi":"10.1016/j.suscom.2023.100924","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100924","url":null,"abstract":"<div><p><span>The growth and development of rice crops primarily depend on appropriate soil water balance for which soil moisture is the key determinant. Soil moisture is a crucial parameter in the hydrological cycle, which has a vital role in optimal water management for sustainable agricultural growth as it has a significant impact on hydrological, ecological, and climatic processes. Thus, accurate estimation of soil moisture is important otherwise it will drastically reduce crop yields, intensifying the global food crisis. A novel soil moisture prediction model (SVM-COLGWO) that incorporates the Grey Wolf Optimizer (GWO) into Chebyshev chaotic maps and opposition-based learning to optimize the Support Vector Machine (SVM) model is proposed. The suggested model simultaneously increases the simulated model’s accuracy while speeding up global convergence. To evaluate the proposed model, the prediction performance is compared with other hybrid and standalone models where the feasibility of the proposed model is validated through superior simulation results (MAE </span><span><math><mo>=</mo></math></span><span> 0.167, MSE </span><span><math><mo>=</mo></math></span><span> 0.179, RMSE </span><span><math><mo>=</mo></math></span> 0.423, MAPE <span><math><mo>=</mo></math></span> 0.162, and <span><math><mrow><msup><mrow><mi>R</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>=</mo></mrow></math></span> 0.949) including Shannon’s Entropy. Thus, based on accurate soil moisture simulation through the proposed model, irrigation can be effectively scheduled for sustainable rice growth.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100924"},"PeriodicalIF":4.5,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49827070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid approach for virtual machine allocation in cloud computing
Pub Date: 2023-10-13. DOI: 10.1016/j.suscom.2023.100922
B. Booba, X. Joshphin Jasaline Anitha, C. Mohan, Jeyalaksshmi S
In this manuscript, a combined approach of the Generalized Backtracking Regularized Adaptive Matching Pursuit algorithm and the Adaptive β-Hill Climbing algorithm for Virtual Machine Allocation in Cloud Computing (BA-VMA-CC) is proposed. The Generalized Backtracking Regularized Adaptive Matching Pursuit algorithm (GBRAMP) is used for the virtual machine (VM) migration process, and the Adaptive β-Hill Climbing Algorithm (AβHCA) is used for VM placement; these two tasks are the essential elements of VM allocation. GBRAMP minimizes cost and energy for both cloud service providers and users through the migration process, saving time and energy, while AβHCA maximizes efficiency and minimizes power consumption and resource wastage. By combining GBRAMP and AβHCA, VMs are optimally allocated to physical machines (PMs) with high efficiency while minimizing cost and energy consumption. The proposed BA-VMA-CC is implemented on the MATLAB platform. Compared with existing methods, namely sine cosine with ant lion optimization (SCA-ALO-VMA-CC), hybrid distinct multiple-objective whale optimization with multi-verse optimization (DMOWOA-MVO-VMA-CC), and the cuckoo search optimization algorithm with particle swarm optimization (CSO-PSO-VMA-CC), the proposed method attains 23.84 %, 28.94 %, and 33.94 % lower energy consumption and 28.94 %, 34.95 %, and 25.36 % lower CPU utilization, respectively.
{"title":"Hybrid approach for virtual machine allocation in cloud computing","authors":"B. Booba , X. Joshphin Jasaline Anitha , C. Mohan , Jeyalaksshmi S","doi":"10.1016/j.suscom.2023.100922","DOIUrl":"10.1016/j.suscom.2023.100922","url":null,"abstract":"<div><p><span><span><span>In this manuscript, a Combined Approach of Generalized Backtracking Regularized Adaptive Matching Pursuit Algorithm and Adaptive β-Hill Climbing Algorithm for Virtual Machine Allocation in </span>Cloud Computing<span> (BA-VMA-CC) is proposed. Generalized Backtracking Regularized Adaptive Matching Pursuit Algorithm (GBRAMP) is used for Virtual Machine (VM) Migration process and Adaptive β-Hill Climbing Algorithm is used to Virtual Machine Placement. These two tasks are essential elements of VM allocation. GBRAMP is used to minimize cost and energy for both cloud service providers and users with help of migration process and to save time and energy. Adaptive β-Hill Climbing Algorithm (AβHCA) is employed for maximizing efficiency, minimizing </span></span>power consumption<span><span><span> and resource wastage. By Combining both GBRAMPA-AβHCA VM is optimally allocated in PM with high efficiency by minimizing cost and energy consumptions. The proposed BA-VMA-CC is implemented in MATLAB platform. The performance of proposed method attains 23.84 %, 28.94 %, 33.94 % lower energy consumption, 28.94 %, 34.95 %, 25.36 % lower CPU utilization is analyzed with existing methods, such as sine cosine with ant lion optimization for VM allocation in Cloud Computing (SCA-ALO-VMA-CC), hybrid distinct multiple object whale optimization and multi-verse optimization for VM allocation in Cloud Computing (DMOWOA-MVO-VMA-CC) and </span>Cuckoo search </span>optimization algorithm and </span></span>particle swarm optimization algorithm (CSO-PSO-VMA-CC) respectively.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"41 ","pages":"Article 100922"},"PeriodicalIF":4.5,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135707818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning-based energy inefficiency detection in the smart buildings
Pub Date: 2023-10-05. DOI: 10.1016/j.suscom.2023.100921
Jueru Huang, Dmitry D. Koroteev, Marina Rynkovskaya
The operation of the heating, ventilation, and air conditioning (HVAC) system is essential for the indoor thermal environment and is significant for energy consumers in commercial properties. Although earlier studies suggested that reinforcement learning controls could increase HVAC energy savings, they lacked sufficient detail regarding end-to-end management. Recently, the focus on gathering and analyzing data from smart meters and buildings connected to energy-saving studies has increased. Deep reinforcement learning (DRL) suggests novel methods for operating HVAC systems and lowering energy usage. This paper evaluates energy consumption with convolutional recurrent neural networks (CRNN) combined with deep reinforcement learning, with the aim of forecasting energy use under various climatic circumstances; the processes are assessed under different communication protocols. The suggested control technique can directly accept quantitative inputs, such as climate and indoor air quality conditions, and control indoor thermal set-points at a supervisory level by utilizing the deep neural network. In a highly efficient office space in the Houston area (TX, USA), time-series data, CRNN, and DRL are used to uncover new energy-saving options. The article presents one year of data from the Net Zero, Energy Star, and Leadership in Energy and Environmental Design (LEED)-certified building, demonstrating a potential energy saving of 8% with the presented design. The findings demonstrate how useful the suggested strategy is in assisting building owners in locating new potential for energy conservation.
{"title":"Deep learning-based energy inefficiency detection in the smart buildings","authors":"Jueru Huang , Dmitry D. Koroteev , Marina Rynkovskaya","doi":"10.1016/j.suscom.2023.100921","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100921","url":null,"abstract":"<div><p><span><span>The operation of the heating, ventilation, and air conditioning (HVAC) system is essential for the indoor thermal environment<span> and is significant for energy consumers in commercial properties. Although earlier studies suggested that reinforcement learning<span> controls could increase HVAC energy savings, they lacked sufficient details regarding end-to-end management. Recently, the focus on gathering and analyzing data from smart meters and buildings connected to energy-saving studies has increased. </span></span></span>Deep reinforcement learning<span> (DRL) suggests novel methods for operating HVAC systems and lowering energy usage. This paper evaluates energy consumption by Convolution Recurrent Neural Networks (CRNN), and Deep Reinforcement Learning is used. This is intended to forecast energy use under various climatic circumstances, and the processes are assessed under different communication protocols. The suggested control technique might directly accept quantitative elements, such as climate and indoor air quality conditions, as input and control indoor thermal set - points at a supervisory level by utilizing the </span></span>deep neural network<span>. In a highly effective office area in the Houston area, time series data<span>, CRNN, and DRL are effectively used to uncover new energy-saving options (TX, USA). The article presents 1-year information from the Net Zero, Energy Star, and Leadership in Energy and Environment Design (LEED)-certified building, demonstrating a potential energy savings of 8% with the presented design. The findings demonstrate how useful the suggested strategy is in assisting building owners in locating new potential for energy conservation.</span></span></p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100921"},"PeriodicalIF":4.5,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49827067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How can a hybrid quantum-inspired gravitational search algorithm decrease energy consumption in IoT-based software-defined networks?
Pub Date: 2023-10-05. DOI: 10.1016/j.suscom.2023.100920
Lian Tong, Lan Yang, Xin Zhao, Li Liu
The growth of Internet of Things (IoT) devices has prompted the growing use of software-defined networks (SDNs) in today's quickly changing technological environment. In SDN, the execution and security of supporting applications and the creation of an adaptable network design allow the network to interact with applications directly. As a result, SDN promotes the growth of IoT-enabled devices, boosts network resource-sharing effectiveness, and improves the reliability of IoT services. While these interconnected systems offer unprecedented convenience and efficiency, they also bring a growing energy-consumption challenge. Characteristics of these networks, such as dynamic topology and energy constraints, make routing particularly challenging. This article examines strategies and innovations that can effectively decrease energy consumption in IoT-based SDNs. Previous methods suffered from problems such as increased energy consumption and delay and reduced network lifetime; thus, fuzzy and meta-heuristic methods are used to broaden the search space and achieve optimum results. Because the problem is NP-hard, the Binary Quantum-Inspired Gravitational Search Algorithm (BQIGSA) is used in this paper to offer a fuzzy-based routing approach in IoT-based SDN that aims to optimize energy, delay, and the expected transmission rate. Fuzzy modeling, and particularly fuzzy routing algorithms, are explained in this study in relation to the decision-making component. The synergy of fuzzy logic and BQIGSA offers a promising avenue for enhancing IoT-based SDNs, tackling the challenges of uncertainty, energy optimization, and adaptive decision-making inherent in IoT networks. The simulation is performed in MATLAB. The outcomes of simulations and tests demonstrate that the suggested approach performs better than current methods in terms of energy usage, delay rate, and data delivery rate.
{"title":"How can a hybrid quantum-inspired gravitational search algorithm decrease energy consumption in IoT-based software-defined networks?","authors":"Lian Tong, Lan Yang, Xin Zhao, Li Liu","doi":"10.1016/j.suscom.2023.100920","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100920","url":null,"abstract":"<div><p>The growth of Internet of Things<span><span> (IoT) devices has prompted the growing use of software-defined networks (SDNs) in today's quickly changing technological environment. In SDN, execution and security of supporting applications and creating an adaptable network design allow the network to associate with applications legitimately. As a result, SDN promotes the growth of IoT-enabled devices, boosts network resource-sharing effectiveness, and boosts the reliability of IoT services. While these interconnected systems offer unprecedented convenience and efficiency, they also come with an increasing energy consumption challenge. The original features of these networks, such as the dynamic topology<span><span> and energy constraints, challenge the routing issue in these networks. This article delves into the strategies and innovations that can effectively decrease energy consumption in IoT-based SDNs. The previous methods had some problems, such as increasing energy consumption, delay and network lifetime, etc. Thus, fuzzy and meta-heuristic methods have been used to maximize the search space and achieve optimum results. Due to the NP-hard nature of this issue, the Binary Quantum-Inspired Gravitational Search Algorithm (BQIGSA) is used in this paper to offer a fuzzy-based routing approach in IoT-based SDN, which aims to optimize energy, delay, and expected transmission rate. </span>Fuzzy modeling, and particularly fuzzy </span></span>routing algorithms<span>, are explained in this study in relation to the decision-making component. The synergy of Fuzzy Logic and BQIGSA offers a promising avenue for enhancing IoT-based SDNs. This innovative approach tackles the challenges of uncertainty, energy optimization, and adaptive decision-making that are inherent in IoT networks. The simulation is performed through MATLAB. The outcomes of simulations and tests demonstrated that the suggested approach performed better than the current methods in terms of energy usage, delay rate, and data delivery rate.</span></span></p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100920"},"PeriodicalIF":4.5,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49865017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic review on techniques and approaches to estimate mobile software energy consumption
Pub Date: 2023-10-04. DOI: 10.1016/j.suscom.2023.100919
Andreas Schuler, Gabriele Kotsis
Developing green and sustainable software has become a prominent research topic over the last years. While approaches are constantly being researched and developed to estimate and, in turn, optimize the energy consumption of software applications, practitioners still lack knowledge about how to address energy consumption as an important non-functional quality aspect and thus develop sustainable software. By providing a comprehensive review of the state of the art in mobile software energy consumption, we examine how research has contributed to filling this knowledge gap over the last decade by providing the foundations to estimate mobile software energy consumption. We categorize the available work according to the approach taken to profile energy consumption, the individual contributions, and the intended platform of use. Furthermore, we examine the availability of tools and frameworks for research and practice. The foundation for this review is a systematically collected selection of 134 studies published between 2011 and 2021. From the data synthesized from the selected studies, we discuss key observations and ongoing challenges in mobile software energy consumption profiling. We believe that a common terminology is key to broad adoption; hence, we propose an ontology describing mobile software energy consumption profiling based on the results of the presented review.
{"title":"A systematic review on techniques and approaches to estimate mobile software energy consumption","authors":"Andreas Schuler , Gabriele Kotsis","doi":"10.1016/j.suscom.2023.100919","DOIUrl":"10.1016/j.suscom.2023.100919","url":null,"abstract":"<div><p>Developing green and sustainable software has become a prominent topic in research over the last years. While approaches are being constantly researched and developed to estimate and in turn optimize the energy consumption of software applications, there is still a lack of knowledge amongst practitioners how to address energy consumption as an important non-functional quality aspect and in turn develop sustainable software. By providing a comprehensive review on the state-of-the-art in mobile software energy consumption, we want to examine how research has contributed to fill this gap in knowledge over the last decade, by providing the foundations to estimate mobile software energy consumption. Therefore, we categorize available work amongst the approach taken to profile energy consumption, the individual contributions and the intended platform of use. Furthermore, we examine the availability of tools and frameworks for research and practice. The foundation for this review is a systematically collected selection of 134 studies published in between 2011 till 2021. From the data synthesized from the selected studies, we discuss key observations and future ongoing challenges in mobile software, energy consumption profiling. Furthermore, we believe that the key for a broad adoption is a common terminology. Henceforth, we propose an ontology describing mobile software energy consumption profiling from the results obtained in the presented review.</p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"41 ","pages":"Article 100919"},"PeriodicalIF":4.5,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134935414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E-AVOA-TS: Enhanced African vultures optimization algorithm-based task scheduling strategy for fog–cloud computing
Pub Date: 2023-09-14. DOI: 10.1016/j.suscom.2023.100918
R. Ghafari, N. Mansouri
In fog computing, inefficient scheduling of user tasks causes additional delays. Moreover, how to schedule tasks that need to be offloaded to fog nodes or cloud nodes has not been fully addressed. The task scheduling process needs to be optimized and efficient in order to address the issues of resource utilization, response time, and energy consumption. This paper proposes an Enhanced African Vultures Optimization Algorithm-based Task Scheduling strategy (E-AVOA-TS) for fog-cloud computing. In the proposed strategy, each village learns from its neighbors rather than from all of its members. The minimization of makespan, cost, and energy consumption is taken as the objective function of the proposed algorithm. To prioritize tasks, the Best Worst Method (BWM) is used to handle the sensitivity of task delays: latency-sensitive tasks are sent to the fog environment, while latency-tolerant tasks are sent to the cloud. E-AVOA is compared with other state-of-the-art optimizers using classic benchmark functions and ten benchmark tests from CEC-C06. Compared with other competitors, E-AVOA-TS improves makespan by 24.2%, cost by 16%, energy consumption by 4.7%, and DST% by 6.2% for large-scale tasks. According to the simulation results, makespan improves by 33%, 53%, and 48%, and energy consumption is reduced by 32%, 44%, and 5%, compared with PSG-M, IWC, and DCOHHOTS, respectively.
{"title":"E-AVOA-TS: Enhanced African vultures optimization algorithm-based task scheduling strategy for fog–cloud computing","authors":"R. Ghafari, N. Mansouri","doi":"10.1016/j.suscom.2023.100918","DOIUrl":"https://doi.org/10.1016/j.suscom.2023.100918","url":null,"abstract":"<div><p><span><span>In fog computing, inefficient scheduling of user tasks causes more delays. Moreover, how to schedule tasks that need to be offloaded to fog nodes or cloud nodes has not been fully addressed. The </span>task scheduling process needs to be optimized and efficient in order to address the issues of resource utilization, response time, and energy consumption. This paper proposes an Enhanced African Vultures Optimization Algorithm-based Task Scheduling Strategy (E-AVOA-TS) for fog-cloud computing. Through the proposed strategy, each village learns from its neighbors rather than from all of its members. The minimization of makespan, cost, and energy consumption in the proposed algorithm are considered as objective function. To prioritize tasks, the </span>Best Worst Method<span> (BWM) is used to handle the sensitivity of task delays. Latency-sensitive tasks are sent to the fog environment, while latency-tolerant tasks are sent to the cloud. E-AVOA is compared to other state-of-the-art optimizers using classic benchmark functions and ten benchmark tests from CEC-C06. Compared to other competitors, E-AVOA-TS outperforms makespan by 24.2%, cost by 16%, energy consumption by 4.7%, and DST% by 6.2% for large scale tasks. According to the simulation results, makespan shows improvements of 33%, 53%, and 48%, and energy consumption is reduced by 32%, 44%, and 5%, compared with PSG-M, IWC, and DCOHHOTS, respectively.</span></p></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"40 ","pages":"Article 100918"},"PeriodicalIF":4.5,"publicationDate":"2023-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49827068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}