MSS-TCP: A congestion control algorithm for boosting TCP performance in mmWave cellular networks
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.005 | ICT Express, vol. 11, no. 4, pp. 631–635
Omar Imhemed Alramli, Zurina Mohd Hanapi, Mohamed Othman, Normalia Samian, Idawaty Ahmad
The increasing demand for high-speed, low-latency applications, especially with 5G mmWave technology, has led to challenges in TCP performance due to signal blockages, small buffers, and high Packet Error Rates (PERs). Existing congestion control algorithms (CCAs) struggle to fully utilize available bandwidth under these conditions. This paper proposes MSS-TCP, a novel congestion control algorithm designed for mmWave networks. MSS-TCP dynamically adjusts the congestion window (cwnd) based on the maximum segment size (MSS) and round-trip time (RTT), improving bandwidth utilization and congestion adaptability. Simulation results using the ns-3 network simulator show that MSS-TCP outperforms state-of-the-art CCAs, including NewReno, HighSpeed, CUBIC, Bottleneck Bandwidth and Round-trip propagation time (BBR), and Fuzzy Logic-based TCP (FB-TCP), particularly when the buffer matches the bandwidth-delay product (BDP), achieving a 24.26% to 45.43% throughput improvement over BBR while maintaining low latency. These findings demonstrate that MSS-TCP enhances TCP performance in 5G mmWave networks, making it a promising solution for next-generation wireless communication.
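The abstract states only that the congestion window is driven by the MSS and the measured RTT; the paper's exact update rule is not reproduced here. As a rough illustration of an MSS- and RTT-driven window update, the following Python sketch grows cwnd toward an assumed bandwidth-delay target; the bandwidth estimate, gain factor, and growth step are placeholder assumptions, not MSS-TCP itself.

```python
# Illustrative sketch only: the actual MSS-TCP update rule is defined in the paper,
# not in this abstract. Here cwnd is grown toward an assumed bandwidth-delay target
# derived from the MSS and a smoothed RTT estimate.

def update_cwnd(cwnd_bytes, mss_bytes, srtt_s, est_bw_bps, gain=1.0):
    """Return a new congestion window (bytes), called once per ACK.

    cwnd_bytes : current congestion window
    mss_bytes  : maximum segment size negotiated for the connection
    srtt_s     : smoothed round-trip time in seconds
    est_bw_bps : assumed estimate of available bottleneck bandwidth (bits/s)
    gain       : assumed aggressiveness factor
    """
    # Target window ~ bandwidth-delay product, rounded to whole segments.
    bdp_bytes = est_bw_bps / 8.0 * srtt_s
    target_segments = max(1, int(gain * bdp_bytes / mss_bytes))
    target_bytes = target_segments * mss_bytes

    if cwnd_bytes < target_bytes:
        # Additive increase: mss^2 / cwnd per ACK adds roughly one MSS per RTT.
        return cwnd_bytes + mss_bytes * mss_bytes / cwnd_bytes
    return target_bytes


if __name__ == "__main__":
    cwnd = 10 * 1448  # start at 10 segments
    for _ in range(5):
        cwnd = update_cwnd(cwnd, mss_bytes=1448, srtt_s=0.02, est_bw_bps=400e6)
        print(int(cwnd))
```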
{"title":"MSS-TCP: A congestion control algorithm for boosting TCP performance in mmwave cellular networks","authors":"Omar Imhemed Alramli , Zurina Mohd Hanapi , Mohamed Othman , Normalia Samian , Idawaty Ahmad","doi":"10.1016/j.icte.2025.05.005","DOIUrl":"10.1016/j.icte.2025.05.005","url":null,"abstract":"<div><div>The increasing demand for high-speed, low-latency applications, especially with 5G mmWave technology, has led to challenges in TCP performance due to signal blockages, small buffers, and high Packet Error Rates (PERs). Existing congestion control algorithms (CCAs) struggle to fully utilize available bandwidth under these conditions. This paper proposes MSS-TCP, a novel congestion control algorithm designed for mmWave networks. MSS-TCP dynamically adjusts the congestion window (cwnd) based on the maximum segment size (MSS) and round-trip time (RTT), improving bandwidth utilization and congestion adaptability. The simulation results using the ns-3 network simulator show that MSS-TCP outperforms state-of-the-art CCAs, including NewReno, HighSpeed, CUBIC, and Bottleneck Bandwidth and Round-trip propagation time (BBR), and Fuzzy Logic-based (FB-TCP), particularly when the buffer matches the bandwidth-delay product (BDP), achieving a 24.26% to 45.43% improvement in throughput compared to BBR while maintaining low latency. These findings demonstrate that MSS-TCP enhances TCP performance in 5G mmWave networks, making it a promising solution for next-generation wireless communication.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 631-635"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EntUn: Mitigating the forget-retain dilemma in unlearning via entropy
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.007 | ICT Express, vol. 11, no. 4, pp. 643–647
Dahuin Jung
Advancements in natural language processing and computer vision have raised concerns about models inadvertently exposing private data and confidently misclassifying inputs. Machine unlearning has emerged as a solution, enabling the removal of specific data influences to meet privacy standards. This work focuses on unlearning in Instance-Removal (IR) and Class-Removal (CR) scenarios: IR targets the removal of individual data points, while CR eliminates all data related to a specific class. We propose EntUn, which maximizes entropy for the forget-set to reduce confidence in data to be forgotten and minimizes it for the retain-set to preserve discriminative power. An entropy-based intra-class mixup further stabilizes this process, using higher-entropy samples to guide controlled information removal. Experiments on CIFAR10, CIFAR100, and TinyImageNet show that EntUn outperforms state-of-the-art baselines, improving forgetting and enhancing privacy protection as confirmed by membership inference attack tests. This demonstrates entropy maximization as a robust strategy for effective unlearning.
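The abstract specifies the direction of the objective (maximize predictive entropy on the forget-set, minimize it on the retain-set) but not its exact form. A minimal PyTorch sketch of such an entropy-based unlearning loss follows; the weighting factor lam and the use of plain cross-entropy for the retain-set are assumptions, and the entropy-based intra-class mixup is omitted.

```python
# Sketch of the entropy-based objective described in the abstract: maximize predictive
# entropy on the forget-set, minimize it on the retain-set. The weighting (lam) and
# the intra-class mixup schedule are assumptions; the paper's exact formulation may differ.
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Mean Shannon entropy of the softmax predictions."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p.clamp_min(1e-12))).sum(dim=1).mean()

def entun_loss(model, x_forget, x_retain, y_retain, lam=1.0):
    # Low confidence (high entropy) on data to be forgotten: the negative sign means
    # minimizing the loss maximizes entropy on the forget-set.
    forget_term = -predictive_entropy(model(x_forget))
    # Keep the retain-set discriminative with standard cross-entropy, which also
    # drives retain-set predictive entropy down.
    retain_term = F.cross_entropy(model(x_retain), y_retain)
    return retain_term + lam * forget_term
```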
{"title":"EntUn: Mitigating the forget-retain dilemma in unlearning via entropy","authors":"Dahuin Jung","doi":"10.1016/j.icte.2025.06.007","DOIUrl":"10.1016/j.icte.2025.06.007","url":null,"abstract":"<div><div>Advancements in natural language processing and computer vision have raised concerns about models inadvertently exposing private data and confidently misclassifying inputs. Machine unlearning has emerged as a solution, enabling the removal of specific data influences to meet privacy standards. This work focuses on unlearning in Instance-Removal (IR) and Class-Removal (CR) scenarios: IR targets the removal of individual data points, while CR eliminates all data related to a specific class. We propose <strong>EntUn</strong>, which maximizes entropy for the forget-set to reduce confidence in data to be forgotten and minimizes it for the retain-set to preserve discriminative power. An entropy-based intra-class mixup further stabilizes this process, using higher-entropy samples to guide controlled information removal. Experiments on CIFAR10, CIFAR100, and TinyImageNet show that <strong>EntUn</strong> outperforms state-of-the-art baselines, improving forgetting and enhancing privacy protection as confirmed by membership inference attack tests. This demonstrates entropy maximization as a robust strategy for effective unlearning.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 643-647"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Federated learning and TinyML on IoT edge devices: Challenges, advances, and future directions
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.008 | ICT Express, vol. 11, no. 4, pp. 754–768
Montaser N.A. Ramadan, Mohammed A.H. Ali, Shin Yee Khoo, Mohammad Alkhedher
This paper examines the integration of Federated Learning (FL), TinyML, and IoT in resource-constrained edge devices, highlighting key challenges and opportunities. It reviews FL and TinyML frameworks with a focus on communication, privacy, accuracy, efficiency, and memory constraints. We propose a novel FL-IoT framework that combines over-the-air (OTA) AI model updates, LoRa-based distributed communication, and lossless data compression techniques such as Run-Length Encoding (RLE), Huffman coding, and LZW to reduce transmission cost, optimize local processing, and maintain data privacy. The framework features Raspberry Pi-based aggregation nodes and microcontroller-based IoT clients, enabling scalable, low-power learning across heterogeneous devices. Evaluation includes memory usage, communication cost, energy consumption, and accuracy trade-offs across multiple FL scenarios. Results show improved scalability and significant power savings compared to baseline FL setups. The proposed framework is particularly impactful in applications such as smart agriculture, healthcare, and smart cities. Future directions for real-time, privacy-preserving edge intelligence are discussed.
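Of the three lossless codecs named above, run-length encoding is the simplest to illustrate. The sketch below compresses a quantized, mostly-zero weight delta into (value, run length) pairs before transmission; the toy integer weight delta and the payload format are assumptions, and Huffman and LZW are not shown.

```python
# Illustrative sketch: run-length encoding of a quantized weight delta before it is
# sent over a constrained link such as LoRa. The input values and pair format are
# assumptions; the framework also supports Huffman and LZW, which are not shown here.

def rle_encode(values):
    """Encode a sequence as (value, run_length) pairs."""
    encoded, prev, count = [], None, 0
    for v in values:
        if v == prev:
            count += 1
        else:
            if prev is not None:
                encoded.append((prev, count))
            prev, count = v, 1
    if prev is not None:
        encoded.append((prev, count))
    return encoded

def rle_decode(pairs):
    out = []
    for value, count in pairs:
        out.extend([value] * count)
    return out

if __name__ == "__main__":
    # Sparse weight deltas are mostly zero, so RLE shrinks them substantially.
    delta = [0, 0, 0, 3, 3, 0, 0, 0, 0, -2, 0, 0]
    packed = rle_encode(delta)
    assert rle_decode(packed) == delta
    print(packed)  # [(0, 3), (3, 2), (0, 4), (-2, 1), (0, 2)]
```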
{"title":"Federated learning and TinyML on IoT edge devices: Challenges, advances, and future directions","authors":"Montaser N.A. Ramadan , Mohammed A.H. Ali , Shin Yee Khoo , Mohammad Alkhedher","doi":"10.1016/j.icte.2025.06.008","DOIUrl":"10.1016/j.icte.2025.06.008","url":null,"abstract":"<div><div>This paper examines the integration of Federated Learning (FL), TinyML, and IoT in resource-constrained edge devices, highlighting key challenges and opportunities. It reviews FL and TinyML frameworks with a focus on communication, privacy, accuracy, efficiency, and memory constraints. We propose a novel FL-IoT framework that combines over-the-air (OTA) AI model updates, LoRa-based distributed communication, and lossless data compression techniques such as Run-Length Encoding (RLE), Huffman coding, and LZW to reduce transmission cost, optimize local processing, and maintain data privacy. The framework features Raspberry Pi-based aggregation nodes and microcontroller-based IoT clients, enabling scalable, low-power learning across heterogeneous devices. Evaluation includes memory usage, communication cost, energy consumption, and accuracy trade-offs across multiple FL scenarios. Results show improved scalability and significant power savings compared to baseline FL setups. The proposed framework is particularly impactful in applications such as smart agriculture, healthcare, and smart cities. Future directions for real-time, privacy-preserving edge intelligence are discussed.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 754-768"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven integrated sensing and communication: Recent advances, challenges, and future prospects
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.010 | ICT Express, vol. 11, no. 4, pp. 790–808
Hammam Salem, Haleema Sadia, MD Muzakkir Quamar, Adeb Magad, Mohammed Elrashidy, Nasir Saeed, Mudassir Masood
The integration of integrated sensing and communication (ISAC) with artificial intelligence (AI)-driven techniques has emerged as a transformative research frontier, attracting significant interest from both academia and industry. As sixth-generation (6G) networks advance to support ultra-reliable, low-latency, and high-capacity applications, machine learning (ML) has become a critical enabler for optimizing ISAC functionalities. Recent advancements in deep learning (DL) and deep reinforcement learning (DRL) have demonstrated immense potential in enhancing ISAC-based systems across diverse domains, including intelligent vehicular networks, autonomous mobility, unmanned aerial vehicle-based communications, radar sensing, localization, millimeter wave/terahertz communication, and adaptive beamforming. However, despite these advancements, several challenges persist, such as real-time decision-making under resource constraints, robustness in adversarial environments, and scalability for large-scale deployments. This paper provides a comprehensive review of ML-driven ISAC methodologies, analyzing their impact on system design, computational efficiency, and real-world implementations, while also discussing existing challenges and future research directions to explore how AI can further enhance ISAC’s adaptability, resilience, and performance in next-generation wireless networks. By bridging theoretical advancements with practical implementations, this paper serves as a foundational reference for researchers, engineers, and industry stakeholders aiming to leverage AI’s full potential in shaping the future of intelligent ISAC systems within the 6G ecosystem.
{"title":"Data-driven integrated sensing and communication: Recent advances, challenges, and future prospects","authors":"Hammam Salem , Haleema Sadia , MD Muzakkir Quamar , Adeb Magad , Mohammed Elrashidy , Nasir Saeed , Mudassir Masood","doi":"10.1016/j.icte.2025.06.010","DOIUrl":"10.1016/j.icte.2025.06.010","url":null,"abstract":"<div><div>The integration of integrated sensing and communication (ISAC) with artificial intelligence (AI)-driven techniques has emerged as a transformative research frontier, attracting significant interest from both academia and industry. As sixth-generation (6G) networks advance to support ultra-reliable, low-latency, and high-capacity applications, machine learning (ML) has become a critical enabler for optimizing ISAC functionalities. Recent advancements in deep learning (DL) and deep reinforcement learning (DRL) have demonstrated immense potential in enhancing ISAC-based systems across diverse domains, including intelligent vehicular networks, autonomous mobility, unmanned aerial vehicles based communications, radar sensing, localization, millimeter wave/terahertz communication, and adaptive beamforming. However, despite these advancements, several challenges persist, such as real-time decision-making under resource constraints, robustness in adversarial environments, and scalability for large-scale deployments. This paper provides a comprehensive review of ML-driven ISAC methodologies, analyzing their impact on system design, computational efficiency, and real-world implementations, while also discussing existing challenges and future research directions to explore how AI can further enhance ISAC’s adaptability, resilience, and performance in next-generation wireless networks. By bridging theoretical advancements with practical implementations, this paper serves as a foundational reference for researchers, engineers, and industry stakeholders, aiming to leverage AI’s full potential in shaping the future of intelligent ISAC systems within the 6G ecosystem.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 790-808"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Efficient Resource Allocation Mechanism with Fuzzy C-Means and Adaptive RNNs for D2D Communications in Cellular Networks
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.003 | ICT Express, vol. 11, no. 4, pp. 743–753
Sambi Reddy Gottam, Udit Narayana Kar
Direct communication links between nearby users can be established via device-to-device (D2D) communications, eliminating the need to route traffic through a base station (BS) or the core network. D2D users transmit at lower power, which also relieves the traffic burden on the BS. Nonorthogonal multiple access (NOMA) allows a transmitter to send multiple signals on the same frequency resource through power-domain superposition, potentially enhancing spectrum efficiency. In this work, an adaptive recurrent neural network (ARNN) is developed to effectively handle the nonlinearity of transmission powers and channel diversity. Furthermore, a fuzzy C-means clustering (FCMC) method is designed to group users on different subcarriers with different strengths; clustering is necessary for improving spectrum utilization. The advanced coati optimization algorithm (ACOA) is subsequently utilized to allocate resources, with a Levy Flight (LF) function used when choosing the weight value in the Coati Optimization Algorithm (COA). The simulation findings demonstrate that our method is better at increasing system throughput while meeting users’ file requests, enabling efficient resource use and power control in device-to-device interactions. The proposed method is implemented in MATLAB, and its performance is evaluated against conventional approaches using standard performance measures. The results indicate that the suggested method achieves superior outage probability values across different user counts: 0.99465 for 40 users, 0.99946 for 60 users, 0.99946 for 80 users, and 0.999446 for 100 users. Comparatively, the Recurrent Neural Network-Honey Badger Algorithm (RNN-HBA) achieved slightly lower outage probabilities, whereas the Deep Belief Network (DBN) and Particle Swarm Optimization (PSO) demonstrated more significant variations, especially with a greater number of users.
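The abstract names fuzzy C-means clustering but gives no feature definitions or parameters. The following sketch shows the standard FCM membership and centroid updates applied to per-user feature vectors; the choice of features (channel gain and transmit power) and the fuzzifier m = 2 are assumptions rather than details from the paper.

```python
# Sketch of the standard fuzzy C-means updates used to group D2D users before
# subcarrier assignment. The user features (here: channel gain and transmit power)
# and the fuzzifier m are assumptions, not taken from the paper.
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per user
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))                # standard FCM membership
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

if __name__ == "__main__":
    users = np.random.default_rng(1).random((40, 2))  # [channel gain, tx power]
    memberships, centers = fuzzy_c_means(users, n_clusters=4)
    print(memberships.argmax(axis=1))  # hard cluster assignment per user
```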
{"title":"An Efficient Resource Allocation Mechanism with Fuzzy C-Means and Adaptive RNNs for D2D Communications in Cellular Networks","authors":"Sambi Reddy Gottam, Udit Narayana Kar","doi":"10.1016/j.icte.2025.05.003","DOIUrl":"10.1016/j.icte.2025.05.003","url":null,"abstract":"<div><div>Direct communication links between nearby users can be established via device-to-device (D2D) communications, eliminating the need for a base station (BS) or remaining core networks. The D2D users’ transmission power is lower than the BS’s traffic burden. Nonorthogonal multiple access (NOMA) expertise allows a transmitter to direct multiple impulses at the same wavelength by power superposition, possibly enhancing spectrum efficiency. In this work, an adaptive recurrent neural network (ARNN) is developed to effectively handle the nonlinearity of transmission powers and channel diversity. Furthermore, a method called fuzzy C-means clustering (FCMC) is designed to group users on different subcarriers with different strengths. For spectrum utilization to improve, clustering is necessary. The advanced coati optimization algorithm (ACOA) is subsequently utilized to assign assets. The Levy Flight (LF) function is taken into consideration when choosing the weight value in the Coati Optimization Algorithm (COA). The simulation findings demonstrate that our method is better at increasing system throughput while meeting users’ file requests. This method enables the efficient use of resources and power control in interactions between devices. The proposed method is implemented in MATLAB, and its performance is evaluated via performance measures. It is compared with conventional approaches. The results indicate that the suggested method achieves superior outage probability values across different user counts, with values of 0.99465 for 40 users, 0.99946 for 60 users, 0.99946 for 80 users, and 0.999446 for 100 users. Comparatively, the Recurrent Neural Network-Honey Badger Algorithm (RNN-HBA) achieved slightly lower outage probabilities, whereas the Deep Belief Network (DBN) and Particle Swarm Optimization (PSO) demonstrated more significant variations, especially with a greater number of users.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 743-753"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optical wireless communications for next-generation radio access networks
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.04.016 | ICT Express, vol. 11, no. 4, pp. 721–727
Abdul Wadud, Anas Basalamah
High-speed and high-bandwidth capabilities provided by free space optical wireless communication (FSO-WC) improve communication technologies with better channel security. With its high carrier frequency, wide bandwidth, and use of unlicensed spectrum, FSO has been identified by researchers working on next-generation wireless communications as a promising way to deliver ultrafast data links that meet the growing demands for massive connectivity and high data rates in a variety of 6G applications, such as cellular wireless backhauls and heterogeneous networks. However, issues like atmospheric turbulence, absorption, and scattering have a major impact on system performance by raising the bit error rate (BER) and symbol error rate (SER). To tackle these problems, this paper investigates Deep Neural Network (DNN) models, particularly Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs). We evaluate a DNN-based equalizer in the context of the Open Radio Access Network (O-RAN), aiming to minimize SER and BER. According to the investigation, CNNs use more processing resources than MLPs while offering superior error reduction. Our investigation shows that FSO can be adopted for high-data-rate fronthaul links between distributed units (DUs) and radio units (RUs).
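The abstract compares MLP and CNN equalizers but does not list their architectures. A minimal PyTorch sketch of an MLP equalizer that maps a window of received samples to a transmitted symbol class is shown below; the window length, layer sizes, and 4-level modulation are assumptions.

```python
# Minimal sketch of an MLP equalizer: it maps a window of distorted received samples
# to the transmitted symbol class. Layer sizes, window length, and modulation order
# are assumptions; the paper's architectures and O-RAN integration are not shown.
import torch
import torch.nn as nn

class MLPEqualizer(nn.Module):
    def __init__(self, window=7, n_symbols=4):      # 4-level modulation assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_symbols),                # logits over symbol classes
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    model = MLPEqualizer()
    rx_window = torch.randn(8, 7)                    # batch of received sample windows
    print(model(rx_window).argmax(dim=1))            # hard symbol decisions
```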
{"title":"Optical wireless communications for next-generation radio access networks","authors":"Abdul Wadud , Anas Basalamah","doi":"10.1016/j.icte.2025.04.016","DOIUrl":"10.1016/j.icte.2025.04.016","url":null,"abstract":"<div><div>High-speed and high-bandwidth capabilities provided by free space optical wireless communication (FSO-WC) improve communication technologies with better channel security. With its high carrier frequency, wide bandwidth, and use of unlicensed spectrum, FSO has been identified by researchers looking into innovations in next-generation wireless communications as a promising way to deliver ultrafast data links to meet the growing demands for massive connectivity and high data rates in a variety of 6G applications, such as cellular wireless backhauls and heterogeneous networks. However, issues like atmospheric turbulence, absorption, and scattering have a major impact on the system’s performance by raising the bit error rate (BER) and symbol error rate (SER). In order to tackle these problems, this paper looks at Deep Neural Network (DNN) models, particularly Multi-Layer Perceptrons (MLP) and Convolutional Neural Networks (CNN). We experiment DNN-based equalizer in context of Open Radio Access Network (O-RAN), which aims to minimize SER and BER. According to the investigation, CNNs use more processing resources than MLPs, although offering superior error reduction. Our investigation shows that FSO can be adopted in high data rate front haul between the distributed units (DUs) and radio units (RUs).</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 721-727"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Explainable AI based feature selection in cancer RNA-seq
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.004 | ICT Express, vol. 11, no. 4, pp. 603–610
Hyein Seo, Jae-Ho Park, Jangho Lee, Byung Chang Chung
Identifying informative features in bioinformatics is challenging due to their small proportion within large datasets. We propose a scalable and interpretable feature selection framework for cancer RNA-seq by transforming non-image bio-data into 2D formats and applying convolutional neural networks (CNNs) with transfer learning for efficient classification. Explainable artificial intelligence (XAI) techniques identify and prioritize important features, while principal component analysis (PCA) determines the optimal number of selected features, ensuring transparency and reliability. Comparative analysis of CNN and XAI highlights the effectiveness of our approach, providing a robust framework for high-dimensional genomic data analysis with applications in cancer diagnosis and prognosis.
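The abstract does not specify which attribution method is used; gradient-based saliency is one common XAI choice and appears here purely for illustration. The sketch reshapes an expression vector into a 2D grid, scores each gene by the gradient magnitude of the predicted class, and keeps the top-ranked genes; the 64x64 layout, the toy CNN, and the cutoff k are assumptions (the paper selects k via PCA).

```python
# Sketch of the pipeline described in the abstract: reshape an RNA-seq expression
# vector into a 2D "image", score each gene with a gradient-based saliency map
# (one simple XAI choice; the paper may use other attribution methods), and keep
# the top-ranked genes. The 64x64 grid and the tiny CNN are illustrative assumptions.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),  # 2 classes assumed
)

expr = torch.rand(1, 1, 64, 64, requires_grad=True)  # 4096 genes laid out on a grid
logits = cnn(expr)
logits[0, logits.argmax()].backward()                # gradient of the predicted class

saliency = expr.grad.abs().flatten()                 # importance score per gene
top_genes = torch.topk(saliency, k=50).indices       # k would be chosen via PCA
print(top_genes[:10])
```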
{"title":"Explainable AI based feature selection in cancer RNA-seq","authors":"Hyein Seo , Jae-Ho Park , Jangho Lee , Byung Chang Chung","doi":"10.1016/j.icte.2025.05.004","DOIUrl":"10.1016/j.icte.2025.05.004","url":null,"abstract":"<div><div>Identifying informative features in bioinformatics is challenging due to their small proportion within large datasets. We propose a scalable and interpretable feature selection framework for cancer RNA-seq by transforming non-image bio-data into 2D formats and applying convolutional neural networks (CNNs) with transfer learning for efficient classification. Explainable artificial intelligence (XAI) techniques identify and prioritize important features, while principal component analysis (PCA) determines the optimal number of selected features, ensuring transparency and reliability. Comparative analysis of CNN and XAI highlights the effectiveness of our approach, providing a robust framework for high-dimensional genomic data analysis with applications in cancer diagnosis and prognosis.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 603-610"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-objective task offloading optimization using deep reinforcement learning with resource distribution clustering
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.006 | ICT Express, vol. 11, no. 4, pp. 734–742
Qin Yang, Sang-Jo Yoo
Task offloading in multi-access edge computing (MEC) systems is critical for managing computational tasks in dynamic urban environments. Existing strategies, including centralized and distributed methods, face challenges such as high communication overheads and regional performance deviations. Clustering approaches have been explored to address these issues, yet they often rely on physical proximity to form clusters, overlooking the variability in task rate distributions across edges. To overcome these limitations, this paper proposes a graph-driven inter-cluster resource distribution (GIRD) clustering scheme that clusters edge nodes based on task request distribution and computing resource status, ensuring similar resource utilization across clusters. Building on this, a proximal policy optimization (PPO)-enabled intra-cluster task offloading algorithm (PITO) is introduced to select an execution server for each task, either an edge server within the cluster or a cloud server, based on a variety of network state information. This dynamic decision-making process optimizes a multi-objective function that includes task processing delay, consumed energy, success rate, and cloud cost. Simulation results demonstrate that the proposed GIRD-PITO framework achieves superior task success rates, reduced delays, and improved regional performance fairness, making it a promising solution for large-scale MEC systems.
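The multi-objective function is described only by its components (delay, energy, success rate, cloud cost). A hedged sketch of one possible scalarized reward for the PPO agent follows; the weights and the linear combination are assumptions, not the paper's formulation.

```python
# Sketch of the kind of multi-objective reward the abstract describes for the PPO
# agent: it trades off task processing delay, consumed energy, success, and cloud
# cost. The weights and the linear scalarization are assumptions, not the paper's.

def offloading_reward(delay_s, energy_j, success, cloud_cost,
                      w_delay=1.0, w_energy=0.5, w_success=2.0, w_cost=0.3):
    """Higher is better: reward success, penalize delay, energy, and cloud usage."""
    return (w_success * (1.0 if success else 0.0)
            - w_delay * delay_s
            - w_energy * energy_j
            - w_cost * cloud_cost)

if __name__ == "__main__":
    # Edge execution: modest delay/energy, no cloud cost.
    print(offloading_reward(delay_s=0.08, energy_j=0.4, success=True, cloud_cost=0.0))
    # Cloud execution: lower device energy but extra cost and delay.
    print(offloading_reward(delay_s=0.15, energy_j=0.1, success=True, cloud_cost=1.0))
```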
{"title":"Multi-objective task offloading optimization using deep reinforcement learning with resource distribution clustering","authors":"Qin Yang, Sang-Jo Yoo","doi":"10.1016/j.icte.2025.05.006","DOIUrl":"10.1016/j.icte.2025.05.006","url":null,"abstract":"<div><div>Task offloading in multi-access edge computing (MEC) systems is critical for managing computational tasks in dynamic urban environments. Existing strategies face challenges such as high communication overheads and regional performance deviations including centralized and distributed methods. Clustering approaches have been explored to address these issues, yet they often rely on physical proximity to form clusters, overlooking the variability in task rate distributions across edges. To overcome these limitations, this paper proposes a graph-driven inter-cluster resource distribution (GIRD) clustering scheme that clusters edge nodes based on task request distribution and computing resource status, ensuring similar resource utilization across clusters. Building on this, a proximal policy optimization (PPO)-enabled intra-cluster task offloading algorithm (PITO) is introduced to determine one execution server for task offloading—either an edge server within a cluster or a cloud server—using various network state information. This dynamic decision-making process optimizes a multi-objective function that includes task processing delay, consumed energy, success rate, and cloud cost. Simulation results demonstrate the proposed GIRD-PITO framework achieves superior task success rates, reduced delays, and improved regional performance fairness, making it a promising solution for large-scale MEC systems. 2018 The Korean Institute of Communications and Information Sciences. Publishing Services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (<span><span>http://creativecommons.org/licenses/by-nc-nd/4.0/</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 734-742"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeLECA: Deblurring for Long and short Exposure images with a dual-branch multimodal cross attention mechanism
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.003 | ICT Express, vol. 11, no. 4, pp. 611–617
Keunho Byeon, Jeewoo Lim, Jin Tae Kwak
Blurred images often result from camera shake or object motion, complicating the visual inspection and recognition of objects. To address this issue, we propose DeLECA, a dual-branch Transformer architecture that leverages the complementary nature of paired blurred images captured with long exposure times and noisy images captured with short exposure times to improve the quality and sharpness of the blurred images. We evaluate DeLECA on two public datasets, GoPro and HIDE. Experimental results show that DeLECA outperforms existing methods, achieving a PSNR of 36.08 dB and an SSIM of 0.965 on the GoPro dataset, and 40.05 dB and 0.972 on the HIDE dataset.
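The abstract describes cross attention between the long-exposure (blurred) and short-exposure (noisy) branches without giving dimensions. A minimal PyTorch sketch of such a cross-attention fusion step follows; the token shape, embedding size, head count, and residual/LayerNorm arrangement are assumptions, not DeLECA's actual blocks.

```python
# Minimal sketch of cross attention between the two branches the abstract describes:
# features from the long-exposure (blurred) image attend to features from the
# short-exposure (noisy) image. Dimensions and head count are assumptions; DeLECA's
# full dual-branch architecture is not reproduced here.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, blur_tokens, noisy_tokens):
        # Blurred-branch tokens query the noisy branch for sharp structure.
        fused, _ = self.attn(query=blur_tokens, key=noisy_tokens, value=noisy_tokens)
        return self.norm(blur_tokens + fused)         # residual connection

if __name__ == "__main__":
    blur = torch.randn(2, 256, 64)    # (batch, tokens, dim) from the blurred image
    noisy = torch.randn(2, 256, 64)   # tokens from the short-exposure noisy image
    print(CrossAttentionFusion()(blur, noisy).shape)
```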
{"title":"DeLECA: Deblurring for Long and short Exposure images with a dual-branch multimodal cross attention mechanism","authors":"Keunho Byeon, Jeewoo Lim, Jin Tae Kwak","doi":"10.1016/j.icte.2025.06.003","DOIUrl":"10.1016/j.icte.2025.06.003","url":null,"abstract":"<div><div>Blurred images often result from camera shake or object motion, complicating the visual inspection and recognition of objects. To address this issue, we propose DeLECA, a dual-branch Transformer architecture that leverages the complementary nature of the paired blurred images, obtained with long-exposure times, and noisy images, captured with short exposure times, to improve the quality and sharpness of the blurred images. We evaluate DeLECA using two public datasets, GoPro and HIDE. Experimental results show that DeLECA outperforms existing methods, achieving PSNR of 36.08 dB and SSIM of 0.965 on the GoPro dataset, and 40.05 dB and 0.972 on the HIDE dataset.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 611-617"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edge-enhanced decentralized vehicle authentication protocol for IoV
Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.013 | ICT Express, vol. 11, no. 4, pp. 624–630
Nai-Wei Lo, Wen-Hsien Yu, Jheng-Jia Huang, Yu-Chi Chen
The Internet of Vehicles (IoV) requires secure and efficient authentication. This study proposes a decentralized protocol leveraging edge nodes and consortium blockchain to enhance security while reducing cloud dependency. A mathematical model evaluates performance and scalability, while simulations validate resilience against network failures, attacks, and topology changes. The protocol integrates with IoT infrastructure and considers implementation costs. Results demonstrate improved efficiency, security, and feasibility for large-scale vehicular networks.
{"title":"Edge-enhanced decentralized vehicle authentication protocol for IoV","authors":"Nai-Wei Lo , Wen-Hsien Yu , Jheng-Jia Huang , Yu-Chi Chen","doi":"10.1016/j.icte.2025.06.013","DOIUrl":"10.1016/j.icte.2025.06.013","url":null,"abstract":"<div><div>The Internet of Vehicles (IoV) requires secure and efficient authentication. This study proposes a decentralized protocol leveraging edge nodes and consortium blockchain to enhance security while reducing cloud dependency. A mathematical model evaluates performance and scalability, while simulations validate resilience against network failures, attacks, and topology changes. The protocol integrates with IoT infrastructure and considers implementation costs. Results demonstrate improved efficiency, security, and feasibility for large-scale vehicular networks.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 624-630"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}