A Deep Reinforcement Learning-based bandwidth demand-oriented routing in Software-Defined Networking
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.07.009 | ICT Express 11(6), pp. 1146–1151
Guang-Jhe Lin, Cheng-Feng Hung, Chih-Heng Ke
With the rise of bandwidth-intensive applications such as video streaming and cloud services, efficient routing decisions in networks have become increasingly important. Bandwidth allocation issues arise from various causes. This paper examines the Bandwidth Starvation Problem (BSP), in which routing decisions that insufficiently account for per-flow bandwidth demand allow low-demand flows to hinder high-demand flows. Current Reinforcement Learning (RL)-based hop-by-hop routing methods overlook bandwidth demand factors, worsening the BSP. We propose a bandwidth demand-oriented reward function and a Deep Reinforcement Learning (DRL) framework to address this challenge. Experiments on Topology Zoo topologies demonstrate that the proposed approach improves throughput, utilization, and maximum service capacity over existing methods.
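For illustration only, a minimal sketch of what a bandwidth demand-oriented reward might look like for a hop-by-hop routing agent; the paper's actual reward function is not given in the abstract, and all names, thresholds, and constants below are assumptions:

```python
# Hypothetical reward sketch, not the authors' function: the agent is rewarded for
# choosing a next-hop link whose residual capacity covers the flow's bandwidth demand,
# and penalized when the choice would leave the flow starved of bandwidth.

def demand_oriented_reward(flow_demand_mbps: float,
                           link_capacity_mbps: float,
                           link_load_mbps: float,
                           reached_destination: bool) -> float:
    """Reward shaped around bandwidth demand rather than hop count alone."""
    residual = max(link_capacity_mbps - link_load_mbps, 0.0)
    if reached_destination:
        return 10.0                      # terminal bonus for delivering the flow
    if residual >= flow_demand_mbps:
        # headroom ratio in (0, 1]: more spare capacity on the chosen link is better
        return residual / link_capacity_mbps
    # penalize choices whose residual capacity cannot cover the demand
    return -1.0 - (flow_demand_mbps - residual) / flow_demand_mbps

# Example: a 40 Mbps flow offered a 100 Mbps link already carrying 70 Mbps
print(demand_oriented_reward(40.0, 100.0, 70.0, reached_destination=False))  # negative
```

The point of the sketch is simply that next-hop choices whose residual capacity cannot cover a flow's demand, the condition that produces bandwidth starvation, are penalized rather than ignored.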
Lightweight YOLO-based real-time fall detection using feature map-level knowledge distillation
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.08.009 | ICT Express 11(6), pp. 1152–1161
Eunho Jung, Dukyun Nam
Fall accidents are increasing, yet monitoring them with real-time CCTV systems remains challenging. This paper compares the performance of YOLOv11 and RT-DETRv2 models for real-time fall detection. Experimental results show that YOLOv11 outperforms RT-DETRv2 in inference speed, making it more suitable for real-time applications. Unlike earlier studies, we propose feature map-based knowledge distillation during training to improve model performance. The proposed YOLO-based fall detection system transfers intermediate representations from a teacher to a student network and optimises two complementary objectives: spatial alignment via Mean-Squared-Error (MSE) loss and channel-wise distribution alignment via Kullback–Leibler (KL) divergence. In experiments, the proposed method improved mean Average Precision (mAP) and reduced processing time by 0.8 ms. Evaluation on AI-hub abnormal behavior datasets confirmed a 0.02 increase in accuracy and F1-score, demonstrating the effectiveness of the proposed distillation method in real-time environments.
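As a rough illustration of the two distillation objectives named above (spatial MSE alignment plus channel-wise KL alignment), the following PyTorch sketch combines them into one loss; the feature shapes, pooling choice, and weighting factors are assumptions rather than the authors' settings:

```python
# Illustrative sketch, not the authors' code: a feature-map-level distillation loss
# combining spatial alignment via MSE with channel-wise distribution alignment via
# KL divergence between teacher and student intermediate feature maps.
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat: torch.Tensor,
                              teacher_feat: torch.Tensor,
                              alpha: float = 1.0,
                              beta: float = 1.0) -> torch.Tensor:
    """student_feat, teacher_feat: (B, C, H, W) intermediate feature maps of equal shape."""
    # Spatial alignment: element-wise MSE between the two feature maps
    mse_term = F.mse_loss(student_feat, teacher_feat)

    # Channel-wise distribution alignment: pool each map spatially, then match the
    # resulting channel distributions with KL divergence (student in log space)
    b, c, _, _ = student_feat.shape
    s = F.log_softmax(student_feat.view(b, c, -1).mean(dim=2), dim=1)
    t = F.softmax(teacher_feat.view(b, c, -1).mean(dim=2), dim=1)
    kl_term = F.kl_div(s, t, reduction="batchmean")

    return alpha * mse_term + beta * kl_term

# Example with random feature maps of matching shape
s_feat = torch.randn(2, 256, 20, 20, requires_grad=True)
t_feat = torch.randn(2, 256, 20, 20)
loss = feature_distillation_loss(s_feat, t_feat)
loss.backward()
```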
Advancements in neural network acceleration: a comprehensive review
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.10.015 | ICT Express 11(6), pp. 1232–1256
Yu-Hao Liu, Yu-Chun Chang, Yan-Hua Ma
The rapid growth of short video platforms, along with advances in big data and the Internet of Things (IoT), has significantly increased the volume of data being generated, providing a strong foundation for the development of artificial intelligence (AI). Among AI technologies, deep learning based on neural networks has achieved notable success in fields such as speech recognition, natural language processing, and image analysis. However, as these models become more complex, traditional hardware architectures face growing limitations. The slowdown of Moore's Law and increasing concerns about power consumption highlight the urgent need for more efficient hardware solutions. In resource-constrained environments like real-time and edge computing, achieving a balance between performance, power, and latency is especially important. This review addresses these challenges through three main contributions: (1) it categorizes and analyzes key optimization techniques at both the algorithm and hardware levels, offering a clear theoretical framework; (2) it summarizes recent advancements in accelerator design, with a focus on technologies such as collaborative acceleration and in-memory computing; and (3) it explores future trends and challenges, offering insights into the evolution of neural network accelerators and potential solutions to emerging technical bottlenecks.
A comprehensive survey on intrusion detection in internet of medical things: Datasets, federated learning, blockchain, and future research directions
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.11.005 | ICT Express 11(6), pp. 1291–1310
Syed Rizwan Hassan, Muhammad Usama Tanveer, Sunil Prajapat, Mohammad Shabaz
The emerging Internet of Medical Things (IoMT) architecture enables real-time processing and monitoring of medical information, but its growing range of applications exposes it to sophisticated cyber threats. Intrusion Detection Systems (IDS) have therefore become indispensable for ensuring confidentiality, integrity, and regulatory compliance. Existing surveys address these security issues but lack dataset analysis and coverage of emerging approaches. This paper presents a systematic review that classifies IoMT-IDSs across deployment strategies, response mechanisms, and evaluation metrics. We develop a multi-dimensional taxonomy that highlights research gaps and outlines a roadmap encompassing federated IDS, blockchain validation, and explainable AI for secure healthcare.
Machine learning and deep learning in FSO communication: A comprehensive survey
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.10.007 | ICT Express 11(6), pp. 1026–1046
Al-Imran, Mostafa Zaman Chowdhury, Rafat Bin Mofidul, Yeong Min Jang
Free space optical (FSO) communication systems offer high-bandwidth, secure data transmission over wireless channels, but their performance is degraded by channel impairments such as atmospheric turbulence. Recent advancements in machine learning (ML) and deep learning (DL) hold considerable promise for mitigating these challenges and enhancing the reliability and efficiency of FSO systems. This comprehensive survey examines ML and DL techniques applied to FSO systems, covering advances in channel modeling, channel estimation, and demodulation. Additionally, it highlights the role of ML and DL in hybrid FSO/RF systems, focusing on resource management, dynamic switching, relay selection, underwater FSO, and acquisition, tracking, and pointing (ATP). Emerging trends, future research directions, standardization efforts, and unresolved challenges are also discussed. Our overall conclusion is that DL, especially hybrid and attention-based models, demonstrates strong potential for dynamic channel adaptation and tracking under turbulence, while reinforcement learning shows promise for real-time resource allocation and switching.
SCA-based energy-efficient design of UAV-RIS-assisted NTN systems with joint trajectory and beamforming optimization
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.09.013 | ICT Express 11(6), pp. 1120–1126
Seungseok Sin, Sangmi Moon, Cheol Hong Kim, Intae Hwang
This study proposes an energy-efficient framework for non-terrestrial networks (NTNs) integrating a low Earth orbit (LEO) satellite, an unmanned aerial vehicle (UAV)-mounted reconfigurable intelligent surface (RIS), and a terrestrial user. The framework jointly optimizes the UAV’s 3D trajectory, satellite beamforming vectors, and RIS reflection coefficients to maximize energy efficiency (EE), accounting for UAV propulsion energy consumption and Quality of Service (QoS) constraints. The resulting non-convex fractional problem is solved using a low-complexity iterative algorithm combining successive convex approximation (SCA) and second-order cone programming (SOCP). Simulation results reveal up to 35% EE improvement over baseline schemes, highlighting the framework’s scalability and practicality for sustainable NTN systems.
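For context, a generic fractional energy-efficiency objective of the kind described above can be written as follows; the paper's exact objective and constraint set are not given in the abstract, so the symbols (beamforming vector w, RIS phase matrix Θ, trajectory q(t), propulsion power P_prop) are illustrative assumptions:

```latex
% Generic illustrative formulation, not the paper's exact problem: maximize the ratio
% of delivered data to total consumed energy over the mission duration T, subject to a
% QoS (minimum SINR) constraint and a transmit power budget.
\begin{equation}
\max_{\mathbf{w},\,\boldsymbol{\Theta},\,\mathbf{q}(t)}\;
\mathrm{EE}
= \frac{\int_{0}^{T} B\,\log_2\!\bigl(1+\gamma\bigl(\mathbf{w},\boldsymbol{\Theta},\mathbf{q}(t)\bigr)\bigr)\,\mathrm{d}t}
       {\int_{0}^{T}\bigl(P_{\mathrm{tx}}(\mathbf{w})+P_{\mathrm{prop}}(\mathbf{q}(t))\bigr)\,\mathrm{d}t}
\quad\text{s.t.}\quad
\gamma \ge \gamma_{\min},\qquad \lVert\mathbf{w}\rVert^{2}\le P_{\max}.
\end{equation}
```

The fractional, non-convex structure of such an objective is what motivates iterative SCA-based methods, in which each subproblem is convexified and solved, for example, as an SOCP.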
UAV-Based Vehicle Detection and Tracking in Urban Environments Using Multi-Task CNN and Deep Reinforcement Learning
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.09.016 | ICT Express 11(6), pp. 1173–1180
Chae-Won Park, Ji-Hye Lim, Seung-Jun Lee, Keum-Seong Nam, Qin Yang, Sang-Jo Yoo
This paper presents a real-time vehicle detection and tracking system using an unmanned aerial vehicle (UAV) to address challenges in dynamic urban environments. The system combines a convolutional neural network (CNN) for vehicle detection with a deep Q-network (DQN)-based navigation policy for continuous tracking. Input images are enhanced using contrast limited adaptive histogram equalization (CLAHE) and unsharp masking. The CNN jointly predicts vehicle center coordinates and probabilistic heatmaps, while a self-attention module captures long-range spatial dependencies to improve detection under clutter and occlusion. The DQN is trained on multi-step spatiotemporal states to learn optimal UAV movement strategies under diverse weather and structural conditions. Experiments conducted in a three-dimensional (3D) urban simulation environment using Unity’s machine learning agents (ML-Agents) show that the self-attention design reduced pixel-level localization error by about 7%, and the DQN-based tracking policy achieved stable convergence after approximately 2000–3000 episodes. These results demonstrate high tracking accuracy and system stability, highlighting the potential of the proposed approach for real-world UAV-based traffic monitoring applications.
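As an illustration of the two enhancement steps named above, a minimal OpenCV sketch of CLAHE followed by unsharp masking is shown below; the parameter values (clip limit, tile grid, blur sigma, sharpening weights) are assumptions, not the authors' settings:

```python
# Illustrative preprocessing sketch: CLAHE on the luminance channel followed by
# unsharp masking, the two enhancement steps named in the abstract.
import cv2
import numpy as np

def enhance_frame(bgr: np.ndarray) -> np.ndarray:
    # CLAHE on the L channel of LAB only, to boost local contrast without color shifts
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)
    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Unsharp masking: sharpen by subtracting a Gaussian-blurred copy of the image
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)
    return sharpened

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
out = enhance_frame(frame)
```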
RIS-assisted UAV communications: A review of system models, frameworks and outage performance
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.10.006 | ICT Express 11(6), pp. 1186–1199
Saddaf Rubab, Ghulam E Mustafa Abro, Hifza Mustafa, Saad Khan Baloch, Sufyan Ali Memon, Nasir Saeed
The integration of Reconfigurable Intelligent Surfaces (RIS) with Unmanned Aerial Vehicles (UAVs) creates a revolutionary paradigm for next-generation wireless communications, particularly in IoT and 6G applications. UAVs provide adaptable and versatile deployment options, yet they face obstacles including signal degradation, restricted Line-of-Sight (LoS), and computing limitations. RIS technology mitigates these constraints by reconfiguring the wireless propagation environment to improve signal quality, energy efficiency, and link reliability. This survey offers a detailed examination of RIS-assisted UAV communication systems, addressing system models, channel characteristics, and essential performance metrics including SNR, BER, and outage probability. We further investigate control schemes that use deep reinforcement learning and federated learning for real-time trajectory optimisation and RIS phase adjustment. Alongside classical designs, this study presents the notion of multi-edge cooperative frameworks, in which UAVs delegate demanding tasks, such as trajectory planning and channel estimation, to nearby edge servers, including mobile base stations or other UAVs. These architectures offer reduced latency, enhanced scalability, and real-time adaptability. The paper also discusses open issues in physical-layer security, edge coordination, and deployment complexity, and it serves as a reference for designing resilient, intelligent, and scalable RIS-UAV communication systems that meet the requirements of future smart cities and mission-critical settings.
Graph neural networks for minimizing worst-case outage probability in dense spectrum-sharing networks
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.11.014 | ICT Express 11(6), pp. 1226–1231
Liang Han, Xiaosen Shi, Tingting Lu
The proliferation of wireless devices makes interference a key challenge for reliable communication in dense spectrum-sharing networks. This paper proposes a graph neural network (GNN)-based power control algorithm that minimizes the worst-case outage probability using statistical channel state information (CSI), i.e., position information. By representing the network as a fully connected directed graph with node and edge features derived from transceiver positions, the GNN employs message-passing layers to aggregate interference patterns and infer near-optimal transmit powers. Simulation results demonstrate the scalability and generalization capability of the proposed method, confirming its suitability for real-time deployment in large-scale wireless systems.
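To make the graph construction concrete, the following is a minimal message-passing sketch under the assumptions stated in the comments; it is not the paper's architecture, only an illustration of position-derived node and edge features feeding a power-control head:

```python
# Minimal illustrative sketch (assumptions, not the paper's model): each transceiver
# pair (link) is a node; node features are transmitter/receiver positions, edge
# features are pairwise distances on a fully connected directed graph. One round of
# message passing aggregates interference context before a head maps each node to a
# transmit power in [0, P_max].
import torch
import torch.nn as nn

class PowerControlGNN(nn.Module):
    def __init__(self, hidden: int = 64, p_max: float = 1.0):
        super().__init__()
        self.p_max = p_max
        self.node_enc = nn.Linear(4, hidden)            # (tx_x, tx_y, rx_x, rx_y)
        self.msg = nn.Sequential(nn.Linear(2 * hidden + 1, hidden), nn.ReLU())
        self.out = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (N, 4) tx/rx coordinates of N links
        h = self.node_enc(positions)                          # (N, hidden)
        # distance from each link's transmitter to every link's receiver (edge feature)
        d = torch.cdist(positions[:, :2], positions[:, 2:])   # (N, N)
        hi = h.unsqueeze(1).expand(-1, h.size(0), -1)         # sender states
        hj = h.unsqueeze(0).expand(h.size(0), -1, -1)         # receiver states
        m = self.msg(torch.cat([hi, hj, d.unsqueeze(-1)], dim=-1))  # (N, N, hidden)
        agg = m.sum(dim=0)                                    # aggregate incoming messages
        return self.p_max * self.out(torch.cat([h, agg], dim=-1)).squeeze(-1)

powers = PowerControlGNN()(torch.rand(5, 4) * 100.0)          # 5 links in a 100 m square
```

Because the same message and output networks are shared across all nodes, a model of this shape can be applied to networks of different sizes, which is the property behind the scalability claim above.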
Efficient RGBW remosaicing using local interpolation and global refinement
Pub Date: 2025-12-01 | DOI: 10.1016/j.icte.2025.09.010 | ICT Express 11(6), pp. 1220–1225
Sanga Park, An Gia Vien, Chul Lee
We propose an efficient RGBW remosaicing algorithm that converts RGBW images into Bayer images using learned kernel-based local interpolation and global residual learning. First, the proposed algorithm extracts local and global features from an input RGBW image. Then, we develop a learned kernel-based interpolation module to generate an intermediate Bayer image using the local features. Next, the proposed algorithm generates a residual image containing complementary information. Finally, we obtain the reconstructed Bayer image by refining the intermediate Bayer image with the residual image. Experimental results demonstrate that the proposed algorithm significantly outperforms state-of-the-art algorithms.
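As a conceptual illustration of the two-stage design (learned kernel-based local interpolation plus a global residual refinement), the following sketch is offered; channel counts, kernel size, and the single-channel mosaic representation are assumptions, not the authors' architecture:

```python
# Conceptual sketch only: an intermediate Bayer image is produced by applying per-pixel
# learned kernels to the RGBW input (local interpolation), then refined by adding a
# globally predicted residual image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RemosaicSketch(nn.Module):
    def __init__(self, k: int = 5):
        super().__init__()
        self.k = k
        self.features = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.kernel_head = nn.Conv2d(32, k * k, 3, padding=1)   # per-pixel interpolation kernels
        self.residual_head = nn.Conv2d(32, 1, 3, padding=1)     # global residual image

    def forward(self, rgbw: torch.Tensor) -> torch.Tensor:
        # rgbw: (B, 1, H, W) mosaiced RGBW input; output: (B, 1, H, W) Bayer estimate
        f = self.features(rgbw)
        kernels = F.softmax(self.kernel_head(f), dim=1)          # (B, k*k, H, W)
        patches = F.unfold(rgbw, self.k, padding=self.k // 2)    # (B, k*k, H*W)
        patches = patches.view(rgbw.size(0), self.k * self.k, *rgbw.shape[-2:])
        intermediate = (kernels * patches).sum(dim=1, keepdim=True)  # local interpolation
        return intermediate + self.residual_head(f)              # global refinement

bayer = RemosaicSketch()(torch.rand(1, 1, 64, 64))
```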