Pub Date: 2025-10-01 | Epub Date: 2025-08-06 | DOI: 10.1016/j.icte.2025.07.011
Mansoo Jung, Sunbeom Jeong, Youngwook Kim, Jungwoo Lee
Test-time adaptation (TTA) is a method of updating model parameters during inference using only unlabeled test data. Unlike in supervised learning, where labels are provided, data augmentation may not function effectively in TTA settings because predictions on original and augmented samples can diverge. We address this limitation by introducing a novel approach that employs selected augmentations with distinct adaptation strategies customized for each transformation. Our approach is designed as a plug-in solution that can easily be integrated into existing methods. Extensive experiments demonstrate that our approach outperforms existing baselines on the ImageNet-C, VisDA2021, and ImageNet-Sketch datasets under various challenging scenarios.
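The abstract does not spell out the selection criterion, but the core idea of keeping only augmentations whose predictions stay consistent with the original sample can be sketched as a simple consistency filter. The symmetric-KL affinity, the threshold, and the toy softmax outputs below are illustrative assumptions, not the paper's method:

```python
import math

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def select_augmentations(p_orig, p_augs, threshold=0.1):
    """Keep augmented views whose predictions stay consistent with the original.

    p_orig: softmax output on the original sample.
    p_augs: list of softmax outputs, one per augmented view.
    A view is kept when the symmetric KL between its prediction and the
    original prediction is below `threshold` (a hypothetical cutoff).
    """
    kept = []
    for i, p_aug in enumerate(p_augs):
        sym_kl = 0.5 * (kl(p_orig, p_aug) + kl(p_aug, p_orig))
        if sym_kl < threshold:
            kept.append(i)
    return kept

# Toy example: the second augmentation flips the prediction and is discarded.
p_orig = [0.7, 0.2, 0.1]
p_augs = [[0.65, 0.25, 0.10],   # mild shift -> consistent
          [0.10, 0.20, 0.70]]   # prediction flipped -> inconsistent
print(select_augmentations(p_orig, p_augs))  # -> [0]
```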
Title: EDAS: Effective Data Augmentation Strategies for test-time adaptation. ICT Express, vol. 11, no. 5, pp. 888-893.
Pub Date: 2025-10-01 | Epub Date: 2025-08-09 | DOI: 10.1016/j.icte.2025.08.001
Md Ilias Bappi, Jannat Afrin Juthy, Kyungbaek Kim
Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness worldwide. Early diagnosis is crucial for preventing irreversible vision loss, but manual screening methods are time-consuming and often inconsistent. Deep learning (DL) techniques have shown promise in automating DR detection; however, many existing models still struggle to capture subtle lesions and distinguish fine-grained severity stages. In this survey, we comprehensively review recent DL-based approaches for DR classification, emphasizing attention mechanisms, feature fusion strategies, and stage-wise grading. To address current gaps, we propose a hybrid taxonomy that identifies effective combinations such as texture-based attention, CNN-Transformer fusion, and multi-modal integration. Additionally, we validate our previously published model, STMFNet, a spatial texture-aware attention network based on EfficientNet, across four benchmark datasets. On EyePACS and Messidor, STMFNet achieves up to 98.10% accuracy, outperforming several state-of-the-art (SOTA) models under similar settings. This study provides both a consolidated overview of DR detection advancements and a practical benchmark framework to guide future research in AI-assisted DR classification.
Title: Deep learning-based diabetic retinopathy recognition and grading: Challenges, gaps, and an improved approach — A survey. ICT Express, vol. 11, no. 5, pp. 993-1013.
Pub Date: 2025-10-01 | Epub Date: 2025-06-29 | DOI: 10.1016/j.icte.2025.06.012
Yangyang Zhao, Jiannan Su, Wenjun Li, Zhiyong Yu, Xiaowei Dai
Remote sensing image fusion plays a crucial role in enhancing image information. However, the computational and storage limitations of existing fusion technologies make real-time processing difficult. Therefore, a lightweight fusion method based on knowledge distillation is proposed for vehicle remote sensing image fusion. Knowledge distillation transfers knowledge from a complex teacher model to a lightweight student model, achieving a significant reduction in model complexity while maintaining high fusion accuracy. Experimental results show that the proposed method performs well on the DroneVehicle dataset, with a model size of only 0.641M parameters.
© 2025 The Korean Institute of Communications and Information Sciences. Publishing Services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
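The teacher-to-student transfer described above is commonly implemented with a Hinton-style distillation loss; the sketch below shows that standard formulation in plain Python. The temperature, weighting factor `alpha`, and toy logits are assumptions for illustration, and the paper's actual loss may differ:

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.7):
    """Hinton-style KD loss: soft-target KL (scaled by T^2) + hard-label CE."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    # KL(teacher || student) on temperature-softened distributions
    soft = sum(t * math.log(t / s) for t, s in zip(p_t, p_s))
    hard = -math.log(softmax(student_logits)[label])  # cross-entropy on the true label
    return alpha * temperature ** 2 * soft + (1 - alpha) * hard

# When the student matches the teacher exactly, the soft term vanishes
# and only the hard-label cross-entropy remains.
print(distillation_loss([2.0, 0.5], [2.0, 0.5], label=0))
```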
Title: A lightweight remote sensing image fusion method for vehicle perception. ICT Express, vol. 11, no. 5, pp. 933-938.
Pub Date: 2025-10-01 | Epub Date: 2025-08-19 | DOI: 10.1016/j.icte.2025.08.003
Jamshidjon Ganiev, Deok-Woong Kim, Seung-Hwan Bae
NGBoost has shown promising results in probabilistic and point estimation tasks. However, it remains unclear whether the method scales to neural architectures, since its base learner is built on decision trees. To resolve this, we design a Neural-NGBoost framework by replacing the base learner with lightweight neural networks and introducing joint gradient estimation for the boosting procedure. Based on natural gradient boosting, we iteratively update the neural base learner by inferring the natural gradient and updating the parameters of the predicted probability distribution. Experimental results show that Neural-NGBoost achieves superior performance across various datasets compared to other boosting methods.
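For a concrete sense of the natural gradient that drives the boosting update, the sketch below computes it for a Normal output distribution parameterized by (mu, log sigma), where the Fisher information is diag(1/sigma^2, 2). This is the textbook NGBoost-style derivation, not the paper's implementation:

```python
import math

def nll_grad(y, mu, log_sigma):
    """Ordinary gradient of the Normal negative log-likelihood w.r.t. (mu, log_sigma)."""
    var = math.exp(2.0 * log_sigma)
    return ((mu - y) / var, 1.0 - (y - mu) ** 2 / var)

def natural_grad(y, mu, log_sigma):
    """Fisher-preconditioned (natural) gradient.

    For Normal(mu, sigma) parameterized by (mu, log_sigma), the Fisher
    information matrix is diag(1/sigma^2, 2), so preconditioning simply
    rescales the two gradient components independently.
    """
    var = math.exp(2.0 * log_sigma)
    g_mu, g_ls = nll_grad(y, mu, log_sigma)
    return (var * g_mu, g_ls / 2.0)

# One boosting iteration would fit a base learner (here, a small neural
# network in Neural-NGBoost) to the negative natural gradient per sample.
print(natural_grad(1.0, 0.0, 0.0))  # -> (-1.0, 0.0)
```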
Title: Neural-NGBoost: Natural gradient boosting with neural network base learners. ICT Express, vol. 11, no. 5, pp. 974-980.
Pub Date: 2025-10-01 | Epub Date: 2025-05-18 | DOI: 10.1016/j.icte.2025.05.007
Md. Alamgir Hossain
With the increasing sophistication of cyber threats, traditional Intrusion Detection Systems (IDS) often fail to adapt to evolving attack patterns, leading to high false positive rates and inadequate detection of zero-day attacks. This study proposes the Deep Q-Learning Intrusion Detection System (DQ-IDS), a novel reinforcement learning (RL)-based approach designed to dynamically learn network attack behaviors and continuously enhance detection performance. Unlike conventional machine learning (ML) and deep learning (DL)-based IDS models that depend on static, pre-trained classifiers, DQ-IDS employs Deep Q-Networks (DQN) with experience replay and adaptive ε-greedy exploration to autonomously classify benign and malicious network traffic. The integration of experience replay mitigates catastrophic forgetting, while adaptive exploration ensures an optimal trade-off between learning efficiency and threat detection. A reward-driven training mechanism reinforces correct classifications and penalizes errors, thereby reducing both false positive and false negative rates. Extensive empirical evaluations on real-world network datasets demonstrate that DQ-IDS achieves a detection accuracy of 97.18%, significantly outperforming conventional IDS solutions in both attack detection and computational efficiency. This work introduces a paradigm shift toward adaptive, self-learning cybersecurity systems capable of real-time, robust threat mitigation in dynamic network environments.
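The training loop described above (ε-greedy exploration, experience replay, reward-driven updates) can be sketched in miniature. To stay self-contained, the sketch uses a tabular Q function on a toy traffic set rather than a deep Q-network, and the states, labels, and reward values are invented for illustration only:

```python
import random

random.seed(0)
STATES = ["port_scan", "normal_http", "syn_flood", "dns_query"]
LABELS = {"port_scan": 1, "normal_http": 0, "syn_flood": 1, "dns_query": 0}
ACTIONS = [0, 1]  # 0 = classify benign, 1 = classify malicious

Q = {s: [0.0, 0.0] for s in STATES}
replay = []                      # experience replay buffer
eps, eps_min, decay = 1.0, 0.05, 0.995
lr = 0.1

for step in range(2000):
    s = random.choice(STATES)
    # epsilon-greedy action selection with decaying exploration rate
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
    r = 1.0 if a == LABELS[s] else -1.0   # reward correct decisions, penalize errors
    replay.append((s, a, r))
    if len(replay) > 500:
        replay.pop(0)
    # learn from a random minibatch; classification is one-step, so no bootstrap term
    for s_b, a_b, r_b in random.sample(replay, min(32, len(replay))):
        Q[s_b][a_b] += lr * (r_b - Q[s_b][a_b])
    eps = max(eps_min, eps * decay)

policy = {s: max(ACTIONS, key=lambda a: Q[s][a]) for s in STATES}
print(policy == LABELS)  # -> True: the learned policy matches the true labels
```

Replacing the table with a neural function approximator (and adding the bootstrapped next-state term for multi-step episodes) recovers the DQN setting the abstract describes.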
Title: Deep Q-learning intrusion detection system (DQ-IDS): A novel reinforcement learning approach for adaptive and self-learning cybersecurity. ICT Express, vol. 11, no. 5, pp. 875-880.
Pub Date: 2025-10-01 | Epub Date: 2025-06-04 | DOI: 10.1016/j.icte.2025.05.011
Seung-Yeol Lee, Hyuntai Kim
This research presents a deep neural network (DNN) approach for predicting the refractive index profile in graded-index multimode fibers (GRIN MMFs). The model was trained on simulated data and achieved an average loss of less than 1% across both selected (structured) and random test sets. This artificial intelligence-driven approach has potential applications in custom fiber design, nonlinear optics, and rapid fiber performance characterization. Future developments may include the use of real-world data and the extension of the model to predict refractive index profiles, further enhancing its versatility.
Title: Artificial intelligence based prediction of refractive index profile of graded refractive index optical fiber. ICT Express, vol. 11, no. 5, pp. 870-874.
Pub Date: 2025-08-01 | Epub Date: 2025-06-21 | DOI: 10.1016/j.icte.2025.06.005
Muhammad Adeel Altaf, Min Young Kim
Three-dimensional (3D) object tracking is crucial in computer vision applications, particularly in autonomous driving, robotics, and surveillance. Despite advancements, effectively utilizing multimodal data to improve multi-object detection and tracking (MODT) remains challenging. This study introduces ACMODT, an affinity computation-based multi-object detection and tracking framework that integrates camera (2D) and LiDAR (3D) data for enhanced MODT performance in autonomous driving. This approach leverages EPNet as a backbone, utilizing 2D–3D feature fusion for accurate proposal generation. A deep neural network (DNN) extracts robust appearance and geometric features, while an improved affinity computation module combines Refined Boost Correlation Features (RBCF) and 3D-Extended Geometric IoU (3D-XGIoU) for precise object association. Motion prediction is refined using a Kalman filter (KF), and Gaussian Mixture Model (GMM)-based data association ensures consistent tracking. Experiments on the KITTI car tracking benchmark for quantitative analysis and the RADIATE dataset for visualization demonstrate that our method achieves superior tracking accuracy and precision compared to state-of-the-art multi-object tracking (MOT) approaches, proving its effectiveness for real-time object tracking.
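As a simplified stand-in for the affinity computation and association step, the sketch below scores track-detection pairs with plain axis-aligned IoU and matches them greedily; the actual method combines learned appearance features with 3D-XGIoU and GMM-based association. The boxes and the affinity threshold are hypothetical:

```python
def iou(a, b):
    """IoU of axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def greedy_associate(tracks, detections, min_affinity=0.3):
    """Match each track to at most one detection, highest affinity first."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    used_t, used_d, matches = set(), set(), {}
    for aff, ti, di in pairs:
        if aff < min_affinity:
            break  # remaining pairs are even weaker; leave them unmatched
        if ti not in used_t and di not in used_d:
            matches[ti] = di
            used_t.add(ti); used_d.add(di)
    return matches

tracks = [(0, 0, 2, 2), (10, 10, 12, 12)]
detections = [(10.5, 10.5, 12.5, 12.5), (0.2, 0.0, 2.2, 2.0)]
print(greedy_associate(tracks, detections))  # -> {0: 1, 1: 0}
```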
Title: Multiple object detection and tracking in autonomous vehicles: A survey on enhanced affinity computation and its multimodal applications. ICT Express, vol. 11, no. 4, pp. 809-818.
Pub Date: 2025-08-01 | Epub Date: 2025-06-28 | DOI: 10.1016/j.icte.2025.05.013
Md Shahriar Nazim, Arbil Chakma, Md. Ibne Joha, Syed Samiul Alam, Md Minhazur Rahman, Miftahul Khoir Shilahul Umam, Yeong Min Jang
Lithium-ion batteries are critical to electric vehicles (EVs) but degrade over time, requiring accurate State of Health (SOH) and Remaining Useful Life (RUL) estimation. This review examines recent AI-based methods, especially Convolutional and Recurrent Neural Networks, for their effectiveness in prediction. It discusses key optimization strategies such as feature selection, parameter tuning, and transfer learning. Public datasets (NASA, CALCE, Oxford) are evaluated for benchmarking. The paper also assesses model complexity, performance metrics, and deployment challenges. Finally, it outlines future directions for improving battery management systems, supporting more efficient, reliable, and scalable integration into real-world EV applications.
Title: Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review. ICT Express, vol. 11, no. 4, pp. 769-789.
Pub Date: 2025-08-01 | Epub Date: 2025-05-22 | DOI: 10.1016/j.icte.2025.04.017
Jose Wanderlei Rocha, Eder Gomes, Vandirleya Barbosa, Arthur Sabino, Luiz Nelson Lima, Gustavo Callou, Francisco Airton Silva, Eunmi Choi, Tuan Anh Nguyen, Dugki Min, Jae-Woo Lee
This study investigates a Cloud–Edge-sensor infrastructure using M/M/c/K queueing theory to analyze the performance of agricultural data systems. It focuses on optimizing data handling and evaluates how system configuration affects performance. The model significantly enhances efficiency and scalability, minimizing the need for extensive physical infrastructure. Analysis shows over 90% utilization in both layers, highlighting the model's applicability to various IoT applications. The M/M/c/K queueing model addresses scalability and real-time data processing challenges in agricultural cloud–edge-sensor networks, improving over traditional methods that lack dynamic scalability. Designed for optimized resource use and reduced data handling delays, this model proves crucial in precision agriculture, where timely data is essential for decision-making. Its versatility extends to various agricultural applications requiring efficient real-time analysis and resource management.
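The M/M/c/K model above has closed-form steady-state probabilities, from which the blocking probability and server utilization follow directly. The sketch below computes them; the arrival rate, service rate, and capacities are hypothetical values, not the paper's parameters:

```python
import math

def mmck(lam, mu, c, K):
    """Steady-state metrics of an M/M/c/K queue.

    lam: arrival rate, mu: per-server service rate,
    c: number of servers, K: total system capacity (in service + waiting).
    """
    a = lam / mu
    # Unnormalized state weights: a^n/n! up to c servers busy,
    # then a^n/(c! * c^(n-c)) while arrivals queue.
    w = [a ** n / math.factorial(n) if n <= c
         else a ** n / (math.factorial(c) * c ** (n - c))
         for n in range(K + 1)]
    norm = sum(w)
    p = [x / norm for x in w]
    p_block = p[K]                    # arrivals lost when the system is full
    lam_eff = lam * (1 - p_block)     # effective (accepted) arrival rate
    utilization = lam_eff / (c * mu)  # mean fraction of busy servers
    avg_in_system = sum(n * pn for n, pn in enumerate(p))
    return {"p_block": p_block, "utilization": utilization,
            "avg_in_system": avg_in_system}

# Hypothetical edge layer: 4 processing workers, capacity of 10 jobs in total.
print(mmck(lam=3.5, mu=1.0, c=4, K=10))
```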
Title: Enhancing data harvesting systems: Performance quantification of Cloud–Edge-sensor networks using queueing theory. ICT Express, vol. 11, no. 4, pp. 597-602.
The rapid development of the Internet of Things (IoT) has driven a significant shift in computing architectures, leading to the rise of the cloud continuum—a flexible framework that combines cloud services with edge and fog computing. While existing survey papers have contributed valuable insights, they often focus narrowly on specific aspects of the continuum or do not fully address its evolving complexities. These limitations underscore the need for a comprehensive and up-to-date analysis of the field. This study bridges these gaps by presenting an extensive review of the cloud continuum, covering its role in enhancing resource management, improving real-time data processing, integrating machine learning approaches, and optimizing user experiences across diverse applications. We examine how edge devices, fog nodes, and cloud infrastructures synergize to enable decentralized data processing, reducing latency in critical areas such as smart cities, healthcare, and autonomous vehicles. Additionally, this study explores the integration of machine learning across edge, fog, and cloud layers, with a focus on inference and distributed learning methods. By highlighting how these technologies enhance efficiency, scalability, and intelligent decision-making, this review provides a holistic perspective on the cloud continuum. Our analysis offers valuable insights into future research directions, emphasizing innovations that can drive next-generation computing systems toward greater efficiency and adaptability.
Title: The journey to cloud as a continuum: Opportunities, challenges, and research directions. Md. Mahmodul Hasan, Tangina Sultana, Md. Delowar Hossain, Ashis Kumar Mandal, Thien-Thu Ngo, Ga-Won Lee, Eui-Nam Huh. ICT Express, vol. 11, no. 4, pp. 666-689. Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.04.015