This letter introduces an approach for minimizing energy consumption in multi-unmanned aerial vehicle (multi-UAV) networks using deep reinforcement learning, with a focus on optimizing the age of information (AoI) in disaster environments. A hierarchical UAV deployment strategy is proposed that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption. By formulating the inter-UAV network path-planning problem as a Markov decision process, a deep Q-network (DQN) strategy is applied to enable real-time decision-making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. Extensive simulation results in both rural and urban scenarios demonstrate the effectiveness of employing a memory access approach within the DQN framework, reducing energy consumption by up to 33.25% in rural settings and 74.20% in urban environments compared with non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructure is compromised. The use of a replay memory approach, particularly the online history approach, proves crucial for adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.
Kim, J., Park, S., Park, H.: "Energy optimization and age of information enhancement in multi-UAV networks using deep reinforcement learning." Electronics Letters, published 14 October 2024. DOI: 10.1049/ell2.70063. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70063
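The core mechanism the letter above relies on, Q-learning driven by a replay memory of past transitions, can be illustrated with a minimal, self-contained sketch. The toy corridor environment, state and action names, buffer capacity, and hyperparameters below are illustrative assumptions, not the letter's actual multi-UAV setup; a full DQN would also replace the Q-table with a neural network.

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size buffer of (state, action, reward, next_state) transitions."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Sample uniformly; cap at current size so early training still works.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)

def replay_update(q_table, memory, batch_size=4, lr=0.1, discount=0.9):
    """One replay step: revisit stored transitions and apply the Bellman update."""
    for state, action, reward, next_state in memory.sample(batch_size):
        best_next = max(q_table[next_state].values())
        target = reward + discount * best_next
        q_table[state][action] += lr * (target - q_table[state][action])

# Toy 1-D corridor: states 0..3, actions move left/right, reward on reaching 3.
states, actions = range(4), ("left", "right")
q_table = {s: {a: 0.0 for a in actions} for s in states}
memory = ReplayMemory(capacity=100)

random.seed(0)
for _ in range(200):
    s = random.choice([0, 1, 2])          # random exploration
    a = random.choice(actions)
    ns = max(0, s - 1) if a == "left" else min(3, s + 1)
    r = 1.0 if ns == 3 else 0.0
    memory.push((s, a, r, ns))
    replay_update(q_table, memory)        # learn from replayed experience
```

After this loop the learned values prefer moving toward the reward (e.g. `q_table[2]["right"]` exceeds `q_table[2]["left"]`), showing how replayed transitions let a single experience be reused many times — the property the letter's memory-based variants exploit.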
Convolutional neural network (CNN)-based models have shown significant progress in low-light image enhancement. However, many existing models have a large number of parameters, making them unsuitable for deployment on terminal devices. Moreover, adjustments to brightness, contrast, and colour in images are often non-linear, and convolution alone is poorly suited to capturing complex non-linear relationships in image data. To address these issues, a model based on an end-to-end custom non-linear transform network (CNTNet) is proposed. CNTNet combines a custom non-linear transform (CNT) layer with CNN layers to achieve image contrast and detail enhancement. The CNT layer introduces transformation parameters at multiple scales to manipulate input images within various ranges. CNTNet processes images progressively by stacking multiple non-linear transform layers and convolutional layers while integrating residual connections to capture and leverage subtle image features; the final output is generated through convolutional layers. Experimental results demonstrate that CNTNet maintains image-quality metrics comparable to mainstream models while reducing the parameter count to only 2K.
Li, Y.: "Low-light image enhancement via lightweight custom non-linear transform network." Electronics Letters, published 9 October 2024. DOI: 10.1049/ell2.70053. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1049/ell2.70053
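The idea of a parametric non-linear transform applied at multiple scales with a residual connection, as described in the abstract above, can be illustrated with a minimal sketch. The power-law curves, the specific parameter values, and the per-pixel formulation below are illustrative assumptions, not the actual CNTNet layer, whose parameters are learned end-to-end.

```python
def cnt_layer(pixels, alphas, gammas):
    """Toy stand-in for a custom non-linear transform layer: a weighted sum
    of power-law curves at different 'scales' (gamma values), added back to
    the input via a residual connection and clamped to the valid range."""
    out = []
    for x in pixels:
        # Each (alpha, gamma) pair targets a different intensity range:
        # gamma < 1 lifts dark pixels, gamma > 1 mostly affects bright ones.
        y = sum(a * (x ** g) for a, g in zip(alphas, gammas))
        out.append(min(1.0, max(0.0, x + y)))  # residual + clamp to [0, 1]
    return out

# Brighten a dark "image" (normalized intensities): the gamma=0.5 term lifts
# low intensities proportionally more than high ones.
dark = [0.05, 0.1, 0.2, 0.4]
enhanced = cnt_layer(dark, alphas=(0.5, 0.2), gammas=(0.5, 2.0))
```

In the sketch, every output pixel is brighter than its input while intensity ordering is preserved, which mirrors the contrast-enhancement role the CNT layer plays before the convolutional stages refine detail.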
This paper introduces a novel software-based approach to enhancing stack smashing protection in C/C++ applications, specifically targeting return-oriented programming attacks, which remain a significant threat to firmware and software security. Traditional canary-based protections are vulnerable to brute-force and format string attacks. Additionally, many stack protection mechanisms require access to the source code or recompilation, complicating the security of existing binaries. This paper proposes a new method, aptly named