A Multiagent Deep Reinforcement Learning Scheme for Energy Use Optimization in UAV-Enabled Wireless Networks With Reconfigurable Intelligent Surfaces
Syed Mohsin Bokhari, Muhammad Shafi, Sarmad Sohaib, Sanad Al Maskari, Muhammad Ahsan Iftikhar
International Journal of Intelligent Systems, 2026-02-18, https://doi.org/10.1155/int/1477541

This work introduces a multiagent deep reinforcement learning (MADRL) framework for energy harvesting (EH) in unmanned aerial vehicle (UAV) networks aided by reconfigurable intelligent surfaces (RIS). The core goal is to maximize the amount of harvested energy subject to quality of service (QoS) constraints in dynamic wireless environments. The model combines centralized training with decentralized execution and uses replay-based learning to achieve stable convergence. Extensive experiments compare MADRL against evolution strategies (ES), deep deterministic policy gradient (DDPG), stochastic DDPG (SD3), and state-of-the-art twin delayed DDPG (TD3)-based approaches. The results show that MADRL achieves an average throughput of over 300 Mbps when deployed with four UAVs, exceeding DDPG and approaching TD3 and adaptive TD3, while consuming minimal processing time and memory. In time-domain tests, MADRL maintains an EH fraction of approximately 0.27–0.31 (mean ≈ 0.29), and in dual-domain evaluation it sustains an EH fraction of approximately 0.73–0.75 (mean ≈ 0.74), indicating robust energy performance in both scenarios. A parameter sensitivity analysis confirms the hyperparameter settings α = 0.6, β = 0.8, η = 3 × 10⁻⁴, and γ = 0.98 as optimal trade-offs. Computational tests confirm practicability for real-time applications: MADRL requires only 0.42 s of processing time and 1.8 GB of memory per episode. These results support the applicability of MADRL to scalable UAV-RIS networks and its potential for energy-efficient wireless deployments.
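As a concrete illustration of the replay-based learning the abstract describes, the sketch below shows a minimal experience-replay buffer of the kind typically paired with centralized-training/decentralized-execution schemes. This is a generic sketch, not the paper's code: the transition fields, capacity, and batch size are illustrative assumptions.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer for off-policy (MA)DRL training
    (illustrative sketch; fields and sizes are assumptions)."""

    def __init__(self, capacity=100_000, seed=0):
        self.buffer = deque(maxlen=capacity)  # old transitions evicted first
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        # Each transition is stored as a flat tuple.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions, which is what stabilizes learning.
        batch = self.rng.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Toy usage: fill with dummy transitions, then draw a minibatch.
buf = ReplayBuffer(capacity=1000)
for t in range(50):
    buf.push(state=t, action=0.1 * t, reward=float(t % 5),
             next_state=t + 1, done=(t == 49))
states, actions, rewards, next_states, dones = buf.sample(8)
print(len(buf), len(states))  # 50 8
```

In a multiagent setting, each agent (UAV) can keep such a buffer for its own transitions while a centralized critic samples across agents during training.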
Distinguish Traffic Condition Based on YOLOv10 Model and Region of Interest (ROI)
Phat Nguyen Huu, Kien Hoang Trung, Quang Tran Minh
International Journal of Intelligent Systems, 2026-02-17, https://doi.org/10.1155/int/4252938

Traffic condition estimation from real-time video cameras is an essential research topic in intelligent transportation systems (ITSs), providing accurate, live traffic information to commuters and traffic management, as video-based traffic data are ubiquitously available. However, many challenges remain, including the complexity of video data with multiple vehicle types in chaotic traffic environments, such as those in developing countries like Vietnam, the lack of specific training datasets, and the appropriate selection or modification of pretrained deep learning (DL) models for video data processing. This paper proposes a novel traffic congestion prediction method based on real-time traffic video analysis using appropriate DL models. The approach applies YOLOv10, YOLOv8, and Faster R-CNN to detect and classify vehicles and uses a region of interest (ROI) to calculate the area they occupy, resulting in a prototype system for real-world applications with three main stages: (i) traffic video data from a chaotic traffic environment, specifically Hanoi, Vietnam, are collected, preprocessed, and annotated for traffic conditions; (ii) various pretrained DL models for traffic condition estimation from video data are studied and applied to these data; and (iii) thorough evaluations with the implemented prototype on real-time video traffic data confirm the effectiveness and efficiency of the proposed method. The results indicate that the proposed method achieves up to 94% accuracy in vehicle detection and processes 27 frames per second. The implemented prototype also provides a visual presentation of traffic density and delivers reliable congestion predictions to commuters and management. The proposed approach not only supports traffic operation and management in regulating traffic flows but also paves the way for applying technology to address complex urban traffic challenges, especially in developing countries.
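The occupancy computation the abstract outlines, i.e., the fraction of an ROI covered by detected vehicle boxes, can be sketched as follows. The boxes, ROI coordinates, and congestion thresholds are illustrative assumptions, not values from the paper; a pixel-union count is used so that overlapping detections are not double-counted.

```python
def roi_occupancy(roi, boxes):
    """Fraction of an axis-aligned ROI covered by detected vehicle boxes.
    roi and boxes are (x1, y1, x2, y2) in integer pixel coordinates;
    the union of covered pixels avoids double-counting overlapping boxes."""
    rx1, ry1, rx2, ry2 = roi
    covered = set()
    for bx1, by1, bx2, by2 in boxes:
        # Clip each detection to the ROI before counting its pixels.
        x1, y1 = max(bx1, rx1), max(by1, ry1)
        x2, y2 = min(bx2, rx2), min(by2, ry2)
        for x in range(x1, x2):
            for y in range(y1, y2):
                covered.add((x, y))
    roi_area = (rx2 - rx1) * (ry2 - ry1)
    return len(covered) / roi_area if roi_area else 0.0

def congestion_level(occupancy, thresholds=(0.3, 0.6)):
    # Illustrative thresholds, not the paper's calibration.
    if occupancy < thresholds[0]:
        return "free-flow"
    if occupancy < thresholds[1]:
        return "moderate"
    return "congested"

# Two partially overlapping detections inside a 100x100 ROI.
roi = (0, 0, 100, 100)
boxes = [(10, 10, 50, 50), (40, 40, 80, 80)]
occ = roi_occupancy(roi, boxes)
print(round(occ, 4), congestion_level(occ))  # 0.31 moderate
```

In a deployed pipeline, the box list would come from the detector's per-frame output (e.g., YOLO bounding boxes filtered to vehicle classes), and a production system would compute the union with a rasterized mask rather than a Python set.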
P. Liu and P. Wang, “Some q-Rung Orthopair Fuzzy Aggregation Operators and their Applications to Multiple-Attribute Decision Making,” International Journal of Intelligent Systems 33, no. 2 (2018): 259–280, https://doi.org/10.1002/int.21927.
In the article, there is an error in the definition of indeterminacy degree presented in Definition 1. The correct Definition 1 is shown below: