Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.07.001
DongCheon Kim, JunHyeok Choi, SeungYong Yoon, Seog Chung Seo
In March 2025, NIST selected HQC as a standardized PQC algorithm. Since HQC relies on binary polynomial operations, optimizations for prime-field schemes like Kyber are not directly applicable. Furthermore, optimizing HQC on Cortex-M4 involves constraints that complicate objective performance evaluation, which has hindered active research in this area. We address these issues and optimize dense-dense polynomial multiplication, HQC’s main computational bottleneck. Using the PQM4 benchmark framework, our implementation achieves speedups of 1139.53–1347.69% in key generation, 1139.53–1253.73% in encapsulation, and 1042.09–1198.78% in decapsulation over PQClean, and 38.78–45.81%, 38.18–45.58%, and 34.76–43.56% improvements over the NTL-based reference, depending on the security level.
Optimized implementation of HQC on Cortex-M4. ICT Express, vol. 11, no. 5, pp. 939–944.
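HQC's arithmetic works over binary polynomials, where addition is XOR and multiplication is carry-less, which is why prime-field optimizations built for schemes like Kyber do not transfer. As a rough illustration only (not the paper's optimized Cortex-M4 implementation), here is a minimal schoolbook sketch of binary-polynomial multiplication and reduction using Python integers as coefficient bit vectors; `gf2_mul` and `gf2_mod` are illustrative names, not from the paper.

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less product of two binary polynomials, each encoded as
    an int with bit i holding the coefficient of x**i."""
    result = 0
    while b:
        if b & 1:
            result ^= a      # addition of coefficients in GF(2) is XOR
        a <<= 1              # multiply a by x
        b >>= 1
    return result

def gf2_mod(a: int, m: int) -> int:
    """Reduce a modulo the polynomial m (e.g. x**n + 1 for a
    circulant ring), by repeated aligned XOR of m."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a
```

For example, (x+1)*(x+1) = x^2 + 1 over GF(2): `gf2_mul(0b11, 0b11)` yields `0b101`, since the cross terms cancel.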
Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.07.011
Mansoo Jung, Sunbeom Jeong, Youngwook Kim, Jungwoo Lee
Test-time adaptation (TTA) is a method of updating model parameters during inference using only unlabeled test data. Unlike supervised learning, where labels are provided, data augmentation may not function effectively in TTA settings due to discrepancies between predictions on original and augmented samples. We address this limitation by introducing a novel approach that employs selected augmentations with distinct adaptation strategies customized for each transformation. Our approach is designed as a plug-in solution that can easily be integrated into existing methods. Extensive experiments demonstrate that our approach outperforms existing baselines on the ImageNet-C, VisDA2021, and ImageNet-Sketch datasets under various challenging scenarios.
EDAS: Effective Data Augmentation Strategies for test-time adaptation. ICT Express, vol. 11, no. 5, pp. 888–893.
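The discrepancy between predictions on an original sample and an augmented view, which the abstract identifies as the obstacle to augmentation in TTA, can be quantified with a divergence measure. The sketch below is a generic illustration of that idea, not the paper's actual selection criterion; `augmentation_gap` and its threshold usage are assumptions.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a probability vector; low entropy
    means a confident prediction."""
    return -sum(x * math.log(x) for x in p if x > 0)

def augmentation_gap(p_orig, p_aug):
    """Symmetric KL divergence between a model's predicted class
    distribution on the original sample and on an augmented view.
    A large gap suggests the augmentation distorts the prediction
    and may be unreliable for test-time adaptation."""
    kl = lambda p, q: sum(pi * math.log(pi / qi)
                          for pi, qi in zip(p, q) if pi > 0)
    return 0.5 * (kl(p_orig, p_aug) + kl(p_aug, p_orig))
```

A hypothetical selection rule would then keep an augmentation only when its gap stays below a chosen threshold.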
Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.05.007
Md. Alamgir Hossain
With the increasing sophistication of cyber threats, traditional Intrusion Detection Systems (IDS) often fail to adapt to evolving attack patterns, leading to high false positive rates and inadequate detection of zero-day attacks. This study proposes the Deep Q-Learning Intrusion Detection System (DQ-IDS), a novel reinforcement learning (RL)-based approach designed to dynamically learn network attack behaviors and continuously enhance detection performance. Unlike conventional machine learning (ML) and deep learning (DL)-based IDS models that depend on static, pre-trained classifiers, DQ-IDS employs Deep Q-Networks (DQN) with experience replay and adaptive ε-greedy exploration to autonomously classify benign and malicious network traffic. The integration of experience replay mitigates catastrophic forgetting, while adaptive exploration ensures an optimal trade-off between learning efficiency and threat detection. A reward-driven training mechanism reinforces correct classifications and penalizes errors, thereby reducing both false positive and false negative rates. Extensive empirical evaluations on real-world network datasets demonstrate that DQ-IDS achieves a detection accuracy of 97.18%, significantly outperforming conventional IDS solutions in both attack detection and computational efficiency. This work introduces a paradigm shift toward adaptive, self-learning cybersecurity systems capable of real-time, robust threat mitigation in dynamic network environments.
Deep Q-learning intrusion detection system (DQ-IDS): A novel reinforcement learning approach for adaptive and self-learning cybersecurity. ICT Express, vol. 11, no. 5, pp. 875–880.
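The ingredients the DQ-IDS abstract names (experience replay, adaptive ε-greedy exploration, reward-driven updates) can be sketched in miniature. The toy below uses a tabular Q-function on a two-state traffic model in place of the paper's Deep Q-Network; every name and hyperparameter here is an illustrative assumption, not the authors' configuration.

```python
import random
from collections import deque

def train_classifier_q(episodes=2000, seed=0):
    """Tabular stand-in for a DQN-based IDS: state 0 = benign traffic,
    state 1 = malicious; the action is the predicted label, rewarded
    +1 when correct and -1 when wrong."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    replay = deque(maxlen=256)            # experience replay buffer
    eps, eps_min, decay, alpha = 1.0, 0.05, 0.995, 0.1

    for _ in range(episodes):
        s = rng.randint(0, 1)             # observe a traffic sample
        if rng.random() < eps:            # explore
            a = rng.randint(0, 1)
        else:                             # exploit current Q-values
            a = max((0, 1), key=lambda act: q[(s, act)])
        r = 1.0 if a == s else -1.0       # reward-driven feedback
        replay.append((s, a, r))
        # revisiting a batch of past transitions is what mitigates
        # catastrophic forgetting in replay-based training
        for bs, ba, br in rng.sample(list(replay), min(8, len(replay))):
            q[(bs, ba)] += alpha * (br - q[(bs, ba)])
        eps = max(eps_min, eps * decay)   # adaptive exploration decay

    # greedy policy: predicted label for each traffic state
    return {s: max((0, 1), key=lambda act: q[(s, act)]) for s in (0, 1)}
```

After training, the greedy policy labels each traffic type correctly; a real DQN replaces the table with a neural network over high-dimensional flow features.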
Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.06.012
Yangyang Zhao, Jiannan Su, Wenjun Li, Zhiyong Yu, Xiaowei Dai
Remote sensing image fusion plays a crucial role in enhancing image information. However, the limitations of existing fusion technologies in terms of computational resources and storage capacity make real-time processing difficult. Therefore, a lightweight fusion method based on knowledge distillation is proposed for vehicle remote sensing image fusion. Knowledge distillation is used to transfer knowledge from a complex teacher model to a lightweight student model, significantly reducing model complexity while maintaining high fusion accuracy. Experimental results show that the proposed method performs well on the DroneVehicle dataset with a model weight of only 0.641M parameters.
2025 The Korean Institute of Communications and Information Sciences. Publishing Services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
A lightweight remote sensing image fusion method for vehicle perception. ICT Express, vol. 11, no. 5, pp. 933–938.
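Teacher-to-student transfer of the kind described above is commonly implemented with a temperature-softened distillation loss (Hinton-style response distillation). The paper does not state its exact loss, so the sketch below is a generic illustration with hypothetical names.

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax; a higher t flattens the
    distribution, exposing the teacher's 'dark knowledge'."""
    exps = [math.exp(z / t) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, t=4.0):
    """KL(teacher || student) on temperature-softened outputs, scaled
    by t**2 so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, t)    # soft targets from the teacher
    q = softmax(student_logits, t)    # student's softened predictions
    return t * t * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice this term is mixed with the student's ordinary task loss; a student matching its teacher exactly incurs zero distillation loss.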
Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.08.003
Jamshidjon Ganiev, Deok-Woong Kim, Seung-Hwan Bae
NGBoost has shown promising results in probabilistic and point estimation tasks. However, it remains unclear whether the method scales to neural architectures, since its base learner is built on decision trees. To resolve this, we design a Neural-NGBoost framework by replacing the base learner with lightweight neural networks and introducing joint gradient estimation for the boosting procedure. Based on natural gradient boosting, we iteratively update the neural base learner by inferring the natural gradient and update the parameter score with its probabilistic distribution. Experimental results show that Neural-NGBoost achieves superior performance across various datasets compared to other boosting methods.
Neural-NGBoost: Natural gradient boosting with neural network base learners. ICT Express, vol. 11, no. 5, pp. 974–980.
Pub Date: 2025-10-01. DOI: 10.1016/j.icte.2025.05.011
Seung-Yeol Lee, Hyuntai Kim
This research presents a deep neural network (DNN) approach for predicting the refractive index profile in graded-index multimode fibers (GRIN MMFs). The model was trained using simulated data and achieved an average loss of less than 1% across both selected (or structured) and random test sets. This artificial intelligence-driven approach has potential applications in custom fiber design, nonlinear optics, and rapid fiber performance characterization. Future developments may include the use of real-world data and the extension of the model to predict refractive index profiles, further enhancing its versatility.
Artificial intelligence based prediction of refractive index profile of graded refractive index optical fiber. ICT Express, vol. 11, no. 5, pp. 870–874.
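GRIN fibers are conventionally described by the power-law index profile n(r) = n1 * sqrt(1 - 2Δ(r/a)^α), which is the kind of target such a DNN would learn to predict. The sketch below evaluates that standard formula; the specific parameter values are illustrative assumptions, not from the paper.

```python
def grin_profile(r, a, n1=1.47, delta=0.01, alpha=2.0):
    """Standard power-law graded-index profile:
    n(r) = n1 * sqrt(1 - 2*delta*(r/a)**alpha) for r <= a,
    flattening to the cladding index beyond the core radius a.
    alpha = 2 gives the common parabolic profile."""
    if r >= a:
        return n1 * (1 - 2 * delta) ** 0.5   # cladding index
    return n1 * (1 - 2 * delta * (r / a) ** alpha) ** 0.5
```

The index is highest on the fiber axis and decreases monotonically toward the cladding, which is what bends rays back toward the core.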
Pub Date: 2025-08-01. DOI: 10.1016/j.icte.2025.05.013
Md Shahriar Nazim, Arbil Chakma, Md. Ibne Joha, Syed Samiul Alam, Md Minhazur Rahman, Miftahul Khoir Shilahul Umam, Yeong Min Jang
Lithium-ion batteries are critical to electric vehicles (EVs) but degrade over time, requiring accurate State of Health (SOH) and Remaining Useful Life (RUL) estimation. This review examines recent AI-based methods, especially Convolutional and Recurrent Neural Networks, for their effectiveness in prediction. It discusses key optimization strategies such as feature selection, parameter tuning, and transfer learning. Public datasets (NASA, CALCE, Oxford) are evaluated for benchmarking. The paper also assesses model complexity, performance metrics, and deployment challenges. Finally, it outlines future directions for improving battery management systems, supporting more efficient, reliable, and scalable integration into real-world EV applications.
Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review. ICT Express, vol. 11, no. 4, pp. 769–789.
Pub Date: 2025-08-01. DOI: 10.1016/j.icte.2025.06.005
Muhammad Adeel Altaf, Min Young Kim
Three-dimensional (3D) object tracking is crucial in computer vision applications, particularly in autonomous driving, robotics, and surveillance. Despite advancements, effectively utilizing multimodal data to improve multi-object detection and tracking (MODT) remains challenging. This study introduces ACMODT, an affinity computation-based multi-object detection and tracking framework that integrates camera (2D) and LiDAR (3D) data for enhanced MODT performance in autonomous driving. This approach leverages EPNet as a backbone, utilizing 2D–3D feature fusion for accurate proposal generation. A deep neural network (DNN) extracts robust appearance and geometric features, while an improved affinity computation module combines Refined Boost Correlation Features (RBCF) and 3D-Extended Geometric IoU (3D-XGIoU) for precise object association. Motion prediction is refined using a Kalman filter (KF), and Gaussian Mixture Model (GMM)-based data association ensures consistent tracking. Experiments on the KITTI car tracking benchmark for quantitative analysis and the RADIATE dataset for visualization demonstrate that our method achieves superior tracking accuracy and precision compared to state-of-the-art multi-object tracking (MOT) approaches, proving its effectiveness for real-time object tracking.
Multiple object detection and tracking in autonomous vehicles: A survey on enhanced affinity computation and its multimodal applications. ICT Express, vol. 11, no. 4, pp. 809–818.
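One building block named in the pipeline above, Kalman-filter motion refinement, can be illustrated in one dimension. A real tracker filters a full position/velocity state per object; the scalar sketch below shows only the predict/correct cycle, with assumed noise parameters.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal 1-D Kalman filter over a noisy scalar track.
    q = process-noise variance (how fast the true state can drift),
    r = measurement-noise variance. Returns the smoothed estimates."""
    x, p = measurements[0], 1.0       # initial state and uncertainty
    estimates = []
    for z in measurements:
        p = p + q                     # predict: uncertainty grows
        kgain = p / (p + r)           # Kalman gain: trust in z vs x
        x = x + kgain * (z - x)       # correct with measurement z
        p = (1 - kgain) * p           # uncertainty shrinks after update
        estimates.append(x)
    return estimates
```

On a noise-free constant signal the filter simply holds the initial value; on noisy tracks it trades responsiveness against smoothing via q and r.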
Pub Date: 2025-08-01. DOI: 10.1016/j.icte.2025.04.017
Jose Wanderlei Rocha, Eder Gomes, Vandirleya Barbosa, Arthur Sabino, Luiz Nelson Lima, Gustavo Callou, Francisco Airton Silva, Eunmi Choi, Tuan Anh Nguyen, Dugki Min, Jae-Woo Lee
This study investigates a Cloud–Edge-sensor infrastructure using M/M/c/K queueing theory to analyze the performance of agricultural data systems. It focuses on optimizing data handling and evaluates the impact of system configuration on performance. The model significantly enhances efficiency and scalability, minimizing the need for extensive physical infrastructure. Analysis shows over 90% utilization in both layers, highlighting the model’s applicability to various IoT applications. The M/M/c/K queueing model addresses scalability and real-time data processing challenges in agricultural Cloud–Edge-sensor networks, improving over traditional methods that lack dynamic scalability. Designed for optimized resource use and reduced data-handling delays, this model proves crucial in precision agriculture, where timely data is essential for decision-making. Its versatility extends to various agricultural applications requiring efficient real-time analysis and resource management.
Enhancing data harvesting systems: Performance quantification of Cloud–Edge-sensor networks using queueing theory. ICT Express, vol. 11, no. 4, pp. 597–602.
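The M/M/c/K model underlying this analysis has closed-form steady-state probabilities, from which blocking probability and per-server utilization follow directly. The sketch below computes them from the standard birth-death balance equations; the parameter values in the usage note are illustrative, not the paper's workload.

```python
from math import factorial

def mmck_metrics(lam, mu, c, k):
    """Steady-state metrics of an M/M/c/K queue (arrival rate lam,
    service rate mu per server, c servers, capacity K customers).
    Returns (blocking probability P_K, per-server utilization)."""
    a = lam / mu
    # unnormalized state probabilities pi_n for n = 0..K customers
    pi = []
    for n in range(k + 1):
        if n <= c:
            pi.append(a ** n / factorial(n))
        else:
            pi.append(a ** n / (factorial(c) * c ** (n - c)))
    z = sum(pi)
    p = [x / z for x in pi]
    p_block = p[k]                          # arrivals lost when full
    util = lam * (1 - p_block) / (c * mu)   # effective load per server
    return p_block, util
```

For example, a single server with lam = mu and room for only one customer (M/M/1/1) blocks half of all arrivals and runs at 50% utilization.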
Pub Date: 2025-08-01. DOI: 10.1016/j.icte.2025.05.009
Su-Jin Kim, Jun Sung Moon, Sung-Yoon Jung
The Artificial Pancreas System (APS) is a device designed to monitor blood glucose levels in real time and automatically regulate insulin for diabetes patients. Blood glucose prediction plays a crucial role in these systems by enabling proactive responses to glucose variations, thereby preventing risks such as hypoglycemia or hyperglycemia and assisting patients in managing their condition effectively. However, Continuous Glucose Monitoring (CGM) sensor data often contain significant sensor noise. Without effectively reducing this noise, prediction accuracy can be severely compromised. Therefore, we first present a deep learning (DL) method for noise reduction in CGM data and, second, propose a long-term blood glucose prediction approach based on the system response function, utilizing multiple inputs (e.g., blood glucose, carbohydrate (CHO) intake, and insulin). In this study, simglucose, based on the UVA-PADOVA simulator, was utilized to test and evaluate the proposed methods. As a result, we found that noise reduction using DL was significantly more effective than conventional filtering methods. Furthermore, the proposed long-term blood glucose prediction approach reliably tracked blood glucose fluctuations in custom scenarios and accurately predicted daily glucose patterns. Even in random scenarios, the proposed model accurately captured blood glucose trends, closely aligning with actual BG values and demonstrating remarkable performance.
Long-term blood glucose prediction using deep learning-based noise reduction. ICT Express, vol. 11, no. 4, pp. 715–720.