
Latest articles in ICT Express

Optimized implementation of HQC on Cortex-M4
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.07.001
DongCheon Kim, JunHyeok Choi, SeungYong Yoon, Seog Chung Seo
In March 2025, NIST selected HQC as a standardized PQC algorithm. Since HQC relies on binary polynomial operations, optimizations for prime-field schemes like Kyber are not directly applicable. Furthermore, optimizing HQC on Cortex-M4 involves constraints that complicate objective performance evaluation, which has hindered active research in this area. We address these issues and optimize dense-dense polynomial multiplication, HQC’s main computational bottleneck. Using the PQM4 benchmark framework, our implementation achieves speedups of 1139.53–1347.69% in key generation, 1139.53–1253.73% in encapsulation, and 1042.09–1198.78% in decapsulation over PQClean, and 38.78–45.81%, 38.18–45.58%, and 34.76–43.56% improvements over the NTL-based reference, depending on the security level.
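For orientation, here is a minimal pure-Python sketch of the dense GF(2)[x] (binary-polynomial) multiplication that the abstract identifies as HQC's bottleneck. The bit-encoding, helper names, and the toy ring size are illustrative assumptions, not the paper's optimized Cortex-M4 routines.

```python
# Minimal sketch of dense-dense multiplication in GF(2)[x], the operation the
# abstract names as HQC's bottleneck. Polynomials are encoded as Python integers
# whose bits are coefficients; this is NOT the optimized Cortex-M4 code.

def gf2_mul(a: int, b: int) -> int:
    """Carry-less (XOR-based) schoolbook multiplication of two GF(2)[x] polynomials."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # addition in GF(2) is XOR
        a <<= 1                  # multiply a by x
        b >>= 1
    return result

def gf2_mod(a: int, modulus: int) -> int:
    """Reduce a GF(2)[x] polynomial modulo another (e.g. x^n + 1 for a circulant ring)."""
    deg_m = modulus.bit_length() - 1
    while a and a.bit_length() - 1 >= deg_m:
        shift = (a.bit_length() - 1) - deg_m
        a ^= modulus << shift
    return a

# Toy example in GF(2)[x]/(x^17 + 1); real HQC parameters use much larger n.
n = 17
modulus = (1 << n) | 1
a = 0b1000000001011001            # degree-15 polynomial
b = 0b1010000000000101            # degree-15 polynomial
prod = gf2_mul(a, b)
print(f"product degree {prod.bit_length() - 1}, reduced: {bin(gf2_mod(prod, modulus))}")
```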
{"title":"Optimized implementation of HQC on Cortex-M4","authors":"DongCheon Kim ,&nbsp;JunHyeok Choi ,&nbsp;SeungYong Yoon ,&nbsp;Seog Chung Seo","doi":"10.1016/j.icte.2025.07.001","DOIUrl":"10.1016/j.icte.2025.07.001","url":null,"abstract":"<div><div>In March 2025, NIST selected HQC as a standardized PQC algorithm. Since HQC relies on binary polynomial operations, optimizations for prime-field schemes like Kyber are not directly applicable. Furthermore, optimizing HQC on Cortex-M4 involves constraints that complicate objective performance evaluation, which has hindered active research in this area. We address these issues and optimize dense-dense polynomial multiplication, HQC’s main computational bottleneck. Using the PQM4 benchmark framework, our implementation achieves speedups of 1139.53–1347.69% in key generation, 1139.53–1253.73% in encapsulation, and 1042.09–1198.78% in decapsulation over PQClean, and 38.78–45.81%, 38.18–45.58%, and 34.76–43.56% improvements over the NTL-based reference, depending on the security level.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 939-944"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
EDAS: Effective Data Augmentation Strategies for test-time adaptation
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.07.011
Mansoo Jung, Sunbeom Jeong, Youngwook Kim, Jungwoo Lee
Test-time adaptation (TTA) is a method of updating model parameters during inference using only unlabeled test data. Unlike supervised learning where labels are provided, data augmentation may not function effectively in TTA settings due to discrepancies between predictions using original and augmented samples. We address this limitation by introducing a novel approach that employs selected augmentations with distinct adaptation strategies customized for each transformation. Our approach is designed as a plug-in solution that can easily be integrated into existing methods. Extensive experiments demonstrate that our approach outperforms existing baselines in the ImageNet-C, VisDA2021, and ImageNet-Sketch dataset under various challenging scenarios.
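The abstract does not spell out the EDAS update rule, so the sketch below shows only a common test-time-adaptation baseline it builds on: adapting normalization parameters by minimizing prediction entropy averaged over augmented views. PyTorch is assumed, and the model, augmentations, and hyperparameters are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn.functional as F

def tta_step(model, batch, augment_fns, optimizer):
    """One generic test-time-adaptation step on an unlabeled batch: minimize the
    mean prediction entropy over the original and augmented views. This is a
    standard TTA baseline, not the EDAS procedure itself."""
    model.train()                                   # keep norm statistics adaptable
    views = [batch] + [aug(batch) for aug in augment_fns]
    loss = 0.0
    for view in views:
        logits = model(view)
        probs = logits.softmax(dim=1)
        log_probs = F.log_softmax(logits, dim=1)
        loss = loss + (-(probs * log_probs).sum(dim=1)).mean()
    loss = loss / len(views)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage: adapt only the BatchNorm affine parameters of a toy model.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.BatchNorm2d(8), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10))
bn_params = [p for m in model.modules()
             if isinstance(m, torch.nn.BatchNorm2d) for p in m.parameters()]
optimizer = torch.optim.SGD(bn_params, lr=1e-3)
test_batch = torch.randn(4, 3, 32, 32)              # stand-in for unlabeled test data
flips = [lambda t: torch.flip(t, dims=[-1])]        # one augmented view
print(tta_step(model, test_batch, flips, optimizer))
```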
{"title":"EDAS: Effective Data Augmentation Strategies for test-time adaptation","authors":"Mansoo Jung ,&nbsp;Sunbeom Jeong ,&nbsp;Youngwook Kim ,&nbsp;Jungwoo Lee","doi":"10.1016/j.icte.2025.07.011","DOIUrl":"10.1016/j.icte.2025.07.011","url":null,"abstract":"<div><div>Test-time adaptation (TTA) is a method of updating model parameters during inference using only unlabeled test data. Unlike supervised learning where labels are provided, data augmentation may not function effectively in TTA settings due to discrepancies between predictions using original and augmented samples. We address this limitation by introducing a novel approach that employs selected augmentations with distinct adaptation strategies customized for each transformation. Our approach is designed as a plug-in solution that can easily be integrated into existing methods. Extensive experiments demonstrate that our approach outperforms existing baselines in the ImageNet-C, VisDA2021, and ImageNet-Sketch dataset under various challenging scenarios.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 888-893"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Q-learning intrusion detection system (DQ-IDS): A novel reinforcement learning approach for adaptive and self-learning cybersecurity
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.05.007
Md. Alamgir Hossain
With the increasing sophistication of cyber threats, traditional Intrusion Detection Systems (IDS) often fail to adapt to evolving attack patterns, leading to high false positive rates and inadequate detection of zero-day attacks. This study proposes the Deep Q-Learning Intrusion Detection System (DQ-IDS), a novel reinforcement learning (RL)-based approach designed to dynamically learn network attack behaviors and continuously enhance detection performance. Unlike conventional machine learning (ML) and deep learning (DL)-based IDS models that depend on static, pre-trained classifiers, DQ-IDS employs Deep Q-Networks (DQN) with experience replay and adaptive ε-greedy exploration to autonomously classify benign and malicious network traffic. The integration of experience replay mitigates catastrophic forgetting, while adaptive exploration ensures an optimal trade-off between learning efficiency and threat detection. A reward-driven training mechanism reinforces correct classifications and penalizes errors, thereby reducing both false positive and false negative rates. Extensive empirical evaluations on real-world network datasets demonstrate that DQ-IDS achieves a detection accuracy of 97.18%, significantly outperforming conventional IDS solutions in both attack detection and computational efficiency. This work introduces a paradigm shift toward adaptive, self-learning cybersecurity systems capable of real-time, robust threat mitigation in dynamic network environments.
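The two ingredients the abstract highlights, experience replay and adaptive ε-greedy exploration, can be sketched in a few lines. The toy DQN below labels a traffic-feature vector as benign or malicious; the feature dimension, network size, decay schedule, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Toy DQN with a replay buffer and adaptive epsilon-greedy exploration that
# classifies a flow-feature vector as benign (0) or malicious (1).

STATE_DIM, N_ACTIONS = 20, 2

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                       # experience replay buffer
epsilon, eps_min, eps_decay = 1.0, 0.05, 0.995      # adaptive exploration schedule

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit Q."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def train_step(batch_size: int = 32) -> None:
    """Sample past transitions and regress Q(s, a) toward the observed reward
    (episodes are single-step here, so no bootstrapped next-state term)."""
    if len(replay) < batch_size:
        return
    states, actions, rewards = zip(*random.sample(replay, batch_size))
    q_pred = q_net(torch.stack(states)).gather(
        1, torch.tensor(actions).unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_pred, torch.tensor(rewards))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Toy interaction loop over synthetic flows with a known ground-truth rule.
for _ in range(200):
    state = torch.randn(STATE_DIM)
    label = int(state[0] > 0)                       # synthetic "malicious" flag
    action = select_action(state)
    reward = 1.0 if action == label else -1.0       # reward-driven training signal
    replay.append((state, action, reward))
    train_step()
    epsilon = max(eps_min, epsilon * eps_decay)
print(f"final epsilon: {epsilon:.3f}")
```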
{"title":"Deep Q-learning intrusion detection system (DQ-IDS): A novel reinforcement learning approach for adaptive and self-learning cybersecurity","authors":"Md. Alamgir Hossain","doi":"10.1016/j.icte.2025.05.007","DOIUrl":"10.1016/j.icte.2025.05.007","url":null,"abstract":"<div><div>With the increasing sophistication of cyber threats, traditional Intrusion Detection Systems (IDS) often fail to adapt to evolving attack patterns, leading to high false positive rates and inadequate detection of zero-day attacks. This study proposes the Deep Q-Learning Intrusion Detection System (DQ-IDS), a novel reinforcement learning (RL)-based approach designed to dynamically learn network attack behaviors and continuously enhance detection performance. Unlike conventional machine learning (ML) and deep learning (DL)-based IDS models that depend on static, pre-trained classifiers, DQ-IDS employs Deep Q-Networks (DQN) with experience replay and adaptive ε-greedy exploration to autonomously classify benign and malicious network traffic. The integration of experience replay mitigates catastrophic forgetting, while adaptive exploration ensures an optimal trade-off between learning efficiency and threat detection. A reward-driven training mechanism reinforces correct classifications and penalizes errors, thereby reducing both false positive and false negative rates. Extensive empirical evaluations on real-world network datasets demonstrate that DQ-IDS achieves a detection accuracy of 97.18%, significantly outperforming conventional IDS solutions in both attack detection and computational efficiency. This work introduces a paradigm shift toward adaptive, self-learning cybersecurity systems capable of real-time, robust threat mitigation in dynamic network environments.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 875-880"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A lightweight remote sensing image fusion method for vehicle perception
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.06.012
Yangyang Zhao, Jiannan Su, Wenjun Li, Zhiyong Yu, Xiaowei Dai
Remote sensing image fusion plays a crucial role in enhancing image information. However, the limitations of existing fusion technologies in terms of computational resources and storage capacity make real-time processing difficult. Therefore, a lightweight fusion method based on knowledge distillation is proposed for vehicle remote sensing image fusion. The knowledge distillation technology is used to transfer the complex teacher model knowledge to the lightweight student model, which realizes the significant reduction of model complexity while maintaining high fusion accuracy. Experimental results show that the proposed method performs well on DroneVehicle dataset and the model weight is only 0.641M.
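The teacher and student fusion networks are not described in the abstract; the sketch below only illustrates a typical distillation objective for such a setup — matching the student's fused output and an intermediate feature map to the teacher's — with all weights and tensor shapes assumed.

```python
import torch
import torch.nn.functional as F

def fusion_distillation_loss(student_fused, teacher_fused,
                             student_feat, teacher_feat, feat_weight=0.5):
    """Generic response + feature distillation for an image-fusion student:
    match the teacher's fused image (L1) and an intermediate feature map (MSE).
    The weighting and choice of layers are assumptions, not the paper's."""
    response_term = F.l1_loss(student_fused, teacher_fused)
    feature_term = F.mse_loss(student_feat, teacher_feat)
    return response_term + feat_weight * feature_term

# Illustrative shapes: a fused 1-channel image and a 32-channel feature map.
student_fused = torch.rand(2, 1, 128, 128, requires_grad=True)
teacher_fused = torch.rand(2, 1, 128, 128)
student_feat = torch.rand(2, 32, 64, 64, requires_grad=True)
teacher_feat = torch.rand(2, 32, 64, 64)
loss = fusion_distillation_loss(student_fused, teacher_fused, student_feat, teacher_feat)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```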
{"title":"A lightweight remote sensing image fusion method for vehicle perception","authors":"Yangyang Zhao ,&nbsp;Jiannan Su ,&nbsp;Wenjun Li ,&nbsp;Zhiyong Yu ,&nbsp;Xiaowei Dai","doi":"10.1016/j.icte.2025.06.012","DOIUrl":"10.1016/j.icte.2025.06.012","url":null,"abstract":"<div><div>Remote sensing image fusion plays a crucial role in enhancing image information. However, the limitations of existing fusion technologies in terms of computational resources and storage capacity make real-time processing difficult. Therefore, a lightweight fusion method based on knowledge distillation is proposed for vehicle remote sensing image fusion. The knowledge distillation technology is used to transfer the complex teacher model knowledge to the lightweight student model, which realizes the significant reduction of model complexity while maintaining high fusion accuracy. Experimental results show that the proposed method performs well on DroneVehicle dataset and the model weight is only 0.641M.</div><div>2025 The Korean Institute of Communications and Information Sciences. Publishing Services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (<span><span>http://creativecommons.org/licenses/by-nc-nd/4.0/</span><svg><path></path></svg></span>).</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 933-938"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Neural-NGBoost: Natural gradient boosting with neural network base learners
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.08.003
Jamshidjon Ganiev, Deok-Woong Kim, Seung-Hwan Bae
NGBoost has shown promising results in probabilistic and point estimation tasks. However, it remains unclear whether the method scales to neural architectures, since its base learner is a decision tree. To resolve this, we design a Neural-NGBoost framework by replacing the base learner with lightweight neural networks and introducing joint gradient estimation for the boosting procedure. Based on natural gradient boosting, we iteratively update the neural base learner by inferring the natural gradient and update the parameter scores according to the predicted probability distribution. Experimental results show Neural-NGBoost achieves superior performance across various datasets compared to other boosting methods.
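To make the boosting recipe concrete, here is a heavily simplified sketch of natural gradient boosting for a Normal(μ, σ) output with small scikit-learn MLPs as base learners. The natural-gradient formulas are the standard NGBoost ones for the Gaussian case; the data, learning rate, and architecture are assumptions and the paper's joint gradient estimation is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simplified natural gradient boosting with neural-network base learners,
# fitting a Normal(mu, sigma) predictive distribution on toy 1-D data.

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(512, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(512)

n_rounds, lr = 20, 0.1
mu = np.full(len(y), y.mean())
log_sigma = np.full(len(y), np.log(y.std()))
learners = []

for _ in range(n_rounds):
    sigma2 = np.exp(2 * log_sigma)
    # Natural gradient of the Gaussian NLL w.r.t. (mu, log_sigma): the ordinary
    # gradient preconditioned by the inverse Fisher information diag(1/sigma^2, 2).
    g_mu = mu - y
    g_ls = 0.5 * (1.0 - (y - mu) ** 2 / sigma2)
    base = MLPRegressor(hidden_layer_sizes=(16,), max_iter=200, random_state=0)
    base.fit(X, np.column_stack([-g_mu, -g_ls]))    # NN base learner, 2 outputs
    step = base.predict(X)
    mu += lr * step[:, 0]
    log_sigma += lr * step[:, 1]
    learners.append(base)

nll = 0.5 * np.mean((y - mu) ** 2 / np.exp(2 * log_sigma)
                    + 2 * log_sigma + np.log(2 * np.pi))
print(f"train NLL after boosting: {nll:.3f}")
```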
{"title":"Neural-NGBoost: Natural gradient boosting with neural network base learners","authors":"Jamshidjon Ganiev ,&nbsp;Deok-Woong Kim ,&nbsp;Seung-Hwan Bae","doi":"10.1016/j.icte.2025.08.003","DOIUrl":"10.1016/j.icte.2025.08.003","url":null,"abstract":"<div><div>NGBoost has shown promising results in probabilistic and point estimation tasks. However, it is vague still whether this method can be scalable to neural architecture system since its base learner is based on decision trees. To resolve this, we design a Neural-NGBoost framework by replacing the base learner with lightweight neural networks and introducing joint gradient estimation for boosting procedure. Based on natural gradient boosting, we iteratively update the neural based learner by inferring natural gradient and update the parameter score with its probabilistic distribution. Experimental results show Neural-NGBoost achieves superior performance across various datasets compared to other boosting methods.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 974-980"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Artificial intelligence based prediction of refractive index profile of graded refractive index optical fiber
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-10-01 | DOI: 10.1016/j.icte.2025.05.011
Seung-Yeol Lee, Hyuntai Kim
This research presents a deep neural network (DNN) approach for predicting the refractive index profile in graded-index multimode fibers (GRIN MMFs). The model was trained using simulated data and achieved an average loss less than 1% across both selected (or structured) and random test sets. This artificial intelligence-driven approach has potential applications in custom fiber design, nonlinear optics, and rapid fiber performance characterization. Future developments may include the use of real-world data and the extension of the model to predict refractive index profiles, further enhancing its versatility.
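As a toy illustration of what DNN-based profile prediction can look like (not the paper's setup, whose inputs and architecture the abstract does not specify), the sketch below fits a small regressor that maps two profile parameters to a sampled power-law graded-index profile n(r) = n1·sqrt(1 − 2Δ(r/a)^g), the standard GRIN fiber model. All numeric values are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy DNN-based prediction of a graded-index (GRIN) refractive index profile:
# learn the mapping (Delta, g) -> sampled n(r) for the standard power-law model
#   n(r) = n1 * sqrt(1 - 2 * Delta * (r / a)**g),  0 <= r <= a.

rng = np.random.default_rng(0)
n1, a = 1.47, 25e-6                          # assumed core index and radius
r = np.linspace(0.0, a, 64)

def grin_profile(delta: float, g: float) -> np.ndarray:
    return n1 * np.sqrt(1.0 - 2.0 * delta * (r / a) ** g)

params = np.column_stack([rng.uniform(0.005, 0.02, 2000),    # index contrast Delta
                          rng.uniform(1.5, 2.5, 2000)])      # profile exponent g
profiles = np.array([grin_profile(d, g) for d, g in params])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=800, random_state=0)
model.fit(params[:1800], profiles[:1800])
pred = model.predict(params[1800:])
rel_err = np.mean(np.abs(pred - profiles[1800:]) / profiles[1800:])
print(f"mean relative error on held-out profiles: {100 * rel_err:.3f}%")
```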
{"title":"Artificial intelligence based prediction of refractive index profile of graded refractive index optical fiber","authors":"Seung-Yeol Lee ,&nbsp;Hyuntai Kim","doi":"10.1016/j.icte.2025.05.011","DOIUrl":"10.1016/j.icte.2025.05.011","url":null,"abstract":"<div><div>This research presents a deep neural network (DNN) approach for predicting the refractive index profile in graded-index multimode fibers (GRIN MMFs). The model was trained using simulated data and achieved an average loss less than 1% across both selected (or structured) and random test sets. This artificial intelligence-driven approach has potential applications in custom fiber design, nonlinear optics, and rapid fiber performance characterization. Future developments may include the use of real-world data and the extension of the model to predict refractive index profiles, further enhancing its versatility.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 5","pages":"Pages 870-874"},"PeriodicalIF":4.2,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.013
Md Shahriar Nazim, Arbil Chakma, Md. Ibne Joha, Syed Samiul Alam, Md Minhazur Rahman, Miftahul Khoir Shilahul Umam, Yeong Min Jang
Lithium-ion batteries are critical to electric vehicles (EVs) but degrade over time, requiring accurate State of Health (SOH) and Remaining Useful Life (RUL) estimation. This review examines recent AI-based methods, especially Convolutional and Recurrent Neural Networks, for their effectiveness in prediction. It discusses key optimization strategies such as feature selection, parameter tuning, and transfer learning. Public datasets (NASA, CALCE, Oxford) are evaluated for benchmarking. The paper also assesses model complexity, performance metrics, and deployment challenges. Finally, it outlines future directions for improving battery management systems, supporting more efficient, reliable, and scalable integration into real-world EV applications.
{"title":"Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review","authors":"Md Shahriar Nazim ,&nbsp;Arbil Chakma ,&nbsp;Md. Ibne Joha,&nbsp;Syed Samiul Alam,&nbsp;Md Minhazur Rahman,&nbsp;Miftahul Khoir Shilahul Umam,&nbsp;Yeong Min Jang","doi":"10.1016/j.icte.2025.05.013","DOIUrl":"10.1016/j.icte.2025.05.013","url":null,"abstract":"<div><div>Lithium-ion batteries are critical to electric vehicles (EVs) but degrade over time, requiring accurate State of Health (SOH) and Remaining Useful Life (RUL) estimation. This review examines recent AI-based methods, especially Convolutional and Recurrent Neural Networks, for their effectiveness in prediction. It discusses key optimization strategies such as feature selection, parameter tuning, and transfer learning. Public datasets (NASA, CALCE, Oxford) are evaluated for benchmarking. The paper also assesses model complexity, performance metrics, and deployment challenges. Finally, it outlines future directions for improving battery management systems, supporting more efficient, reliable, and scalable integration into real-world EV applications.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 769-789"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multiple object detection and tracking in autonomous vehicles: A survey on enhanced affinity computation and its multimodal applications
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.06.005
Muhammad Adeel Altaf, Min Young Kim
Three-dimensional (3D) object tracking is crucial in computer vision applications, particularly in autonomous driving, robotics, and surveillance. Despite advancements, effectively utilizing multimodal data to improve multi-object detection and tracking (MODT) remains challenging. This study introduces ACMODT, an affinity computation-based multi-object detection and tracking framework that integrates camera (2D) and LiDAR (3D) data for enhanced MODT performance in autonomous driving. This approach leverages EPNet as a backbone, utilizing 2D–3D feature fusion for accurate proposal generation. A deep neural network (DNN) extracts robust appearance and geometric features, while an improved affinity computation module combines Refined Boost Correlation Features (RBCF) and 3D-Extended Geometric IoU (3D-XGIoU) for precise object association. Motion prediction is refined using a Kalman filter (KF), and Gaussian Mixture Model (GMM)-based data association ensures consistent tracking. Experiments on the KITTI car tracking benchmark for quantitative analysis and the RADIATE dataset for visualization demonstrate that our method achieves superior tracking accuracy and precision compared to state-of-the-art multi-object tracking (MOT) approaches, proving its effectiveness for real-time object tracking.
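The affinity-plus-association step can be pictured with a much simpler stand-in: the sketch below builds an IoU affinity matrix between axis-aligned 2D boxes and solves the assignment with the Hungarian algorithm. The paper's RBCF appearance features, 3D-XGIoU, Kalman prediction, and GMM association are not reproduced; box values are purely illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Simplified affinity computation and data association: IoU between tracked and
# detected axis-aligned boxes [x1, y1, x2, y2], then optimal assignment.

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    x1, y1 = np.maximum(box_a[:2], box_b[:2])
    x2, y2 = np.minimum(box_a[2:], box_b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks: np.ndarray, detections: np.ndarray, min_iou: float = 0.3):
    """Return (track_idx, det_idx) pairs chosen by maximizing total IoU affinity."""
    affinity = np.array([[iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(-affinity)          # maximize affinity
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_iou]

tracks = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], dtype=float)
dets = np.array([[21, 19, 31, 29], [1, 1, 11, 11]], dtype=float)
print(associate(tracks, dets))   # expected: track 0 -> det 1, track 1 -> det 0
```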
{"title":"Multiple object detection and tracking in autonomous vehicles: A survey on enhanced affinity computation and its multimodal applications","authors":"Muhammad Adeel Altaf ,&nbsp;Min Young Kim","doi":"10.1016/j.icte.2025.06.005","DOIUrl":"10.1016/j.icte.2025.06.005","url":null,"abstract":"<div><div>Three-dimensional (3D) object tracking is crucial in computer vision applications, particularly in autonomous driving, robotics, and surveillance. Despite advancements, effectively utilizing multimodal data to improve multi-object detection and tracking (MODT) remains challenging. This study introduces ACMODT, an affinity computation-based multi-object detection and tracking framework that integrates camera (2D) and LiDAR (3D) data for enhanced MODT performance in autonomous driving. This approach leverages EPNet as a backbone, utilizing 2D–3D feature fusion for accurate proposal generation. A deep neural network (DNN) extracts robust appearance and geometric features, while an improved affinity computation module combines Refined Boost Correlation Features (RBCF) and 3D-Extended Geometric IoU (3D-XGIoU) for precise object association. Motion prediction is refined using a Kalman filter (KF), and Gaussian Mixture Model (GMM)-based data association ensures consistent tracking. Experiments on the KITTI car tracking benchmark for quantitative analysis and the RADIATE dataset for visualization demonstrate that our method achieves superior tracking accuracy and precision compared to state-of-the-art multi-object tracking (MOT) approaches, proving its effectiveness for real-time object tracking.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 809-818"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Enhancing data harvesting systems: Performance quantification of Cloud–Edge-sensor networks using queueing theory
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.04.017
Jose Wanderlei Rocha, Eder Gomes, Vandirleya Barbosa, Arthur Sabino, Luiz Nelson Lima, Gustavo Callou, Francisco Airton Silva, Eunmi Choi, Tuan Anh Nguyen, Dugki Min, Jae-Woo Lee
This study investigates a Cloud–Edge-sensors infrastructure using M/M/c/K queuing theory to analyze agricultural data systems’ performance. It focuses on optimizing data handling and evaluates the system configuration impacts on performance. The model significantly enhances efficiency and scalability, minimizing the need for extensive physical infrastructure. Analysis shows over 90% utilization in both layers, highlighting the model’s applicability to various IoT applications. The M/M/c/K queuing model addresses scalability and real-time data processing challenges in agricultural cloud–edge-sensor networks, improving over traditional methods lacking dynamic scalability. Designed for optimized resource use and reduced data handling delays, this model proves crucial in precision agriculture, where timely data is essential for decision-making. Its versatility extends to various agricultural applications requiring efficient real-time analysis and resource management.
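For readers unfamiliar with the notation, the sketch below evaluates the standard closed-form steady-state quantities of an M/M/c/K queue (blocking probability, mean occupancy, server utilization). The numeric parameters are illustrative, not the paper's measured cloud or edge rates.

```python
from math import factorial

# Closed-form steady-state metrics of an M/M/c/K queue: c servers, total system
# capacity K, Poisson arrivals (rate lam), exponential service (rate mu).

def mmck_metrics(lam: float, mu: float, c: int, K: int):
    """Return (blocking probability, mean number in system, server utilization)."""
    a = lam / mu                        # offered load
    rho = a / c
    # Unnormalized state probabilities p_n for n = 0..K.
    probs = [a**n / factorial(n) if n <= c
             else a**c / factorial(c) * rho**(n - c)
             for n in range(K + 1)]
    norm = sum(probs)
    p = [x / norm for x in probs]
    p_block = p[K]                                  # arrivals lost when the queue is full
    mean_n = sum(n * pn for n, pn in enumerate(p))  # E[N]
    util = lam * (1 - p_block) / (c * mu)           # carried load per server
    return p_block, mean_n, util

# Example: 2 edge servers, capacity 10 jobs, lambda = 5 jobs/s, mu = 3 jobs/s each.
pb, en, u = mmck_metrics(lam=5.0, mu=3.0, c=2, K=10)
print(f"blocking={pb:.4f}, E[N]={en:.2f}, utilization={u:.2%}")
```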
{"title":"Enhancing data harvesting systems: Performance quantification of Cloud–Edge-sensor networks using queueing theory","authors":"Jose Wanderlei Rocha ,&nbsp;Eder Gomes ,&nbsp;Vandirleya Barbosa ,&nbsp;Arthur Sabino ,&nbsp;Luiz Nelson Lima ,&nbsp;Gustavo Callou ,&nbsp;Francisco Airton Silva ,&nbsp;Eunmi Choi ,&nbsp;Tuan Anh Nguyen ,&nbsp;Dugki Min ,&nbsp;Jae-Woo Lee","doi":"10.1016/j.icte.2025.04.017","DOIUrl":"10.1016/j.icte.2025.04.017","url":null,"abstract":"<div><div>This study investigates a Cloud–Edge-sensors infrastructure using M/M/c/K queuing theory to analyze agricultural data systems’ performance. It focuses on optimizing data handling and evaluates the system configuration impacts on performance. The model significantly enhances efficiency and scalability, minimizing the need for extensive physical infrastructure. Analysis shows over 90% utilization in both layers, highlighting the model’s applicability to various IoT applications. The M/M/c/K queuing model addresses scalability and real-time data processing challenges in agricultural cloud–edge-sensor networks, improving over traditional methods lacking dynamic scalability. Designed for optimized resource use and reduced data handling delays, this model proves crucial in precision agriculture, where timely data is essential for decision-making. Its versatility extends to various agricultural applications requiring efficient real-time analysis and resource management.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 597-602"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Long-term blood glucose prediction using deep learning-based noise reduction
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 (COMPUTER SCIENCE, INFORMATION SYSTEMS) | Pub Date: 2025-08-01 | DOI: 10.1016/j.icte.2025.05.009
Su-Jin Kim, Jun Sung Moon, Sung-Yoon Jung
The Artificial Pancreas System (APS) is a device designed to monitor blood glucose levels in real-time and automatically regulate insulin for diabetes patients. Blood glucose prediction plays a crucial role in these systems by enabling proactive responses to glucose variations, thereby preventing risks such as hypoglycemia or hyperglycemia and assisting patients in managing their condition effectively. However, Continuous Glucose Monitoring (CGM) sensor data often contain significant sensor noise. Without effectively reducing the sensor noise, prediction accuracy can be severely compromised. Therefore, we first present a deep learning (DL) method for noise reduction in CGM data and, second, propose a long-term blood glucose prediction approach based on the system response function, utilizing a multi-input(e.g., blood glucose, carbohydrate (CHO) intake, and insulin). In this study, simglucose, based on the UVA-PADOVA simulator, was utilized to test and evaluate the proposed methods. As a result, we found that noise reduction using deep learning (DL) was significantly more effective than conventional filtering methods. Furthermore, the proposed long-term blood glucose prediction approach reliably tracked blood glucose fluctuations in custom scenarios and accurately predicted daily glucose patterns. Even in random scenarios, the proposed model accurately captured blood glucose trends, closely aligning with actual BG values and demonstrating remarkable performance.
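The "system response function" idea with multiple inputs can be sketched as a convolution model: future glucose is the current value plus carbohydrate and insulin inputs convolved with impulse-response kernels. The kernel shapes, gains, and time constants below are illustrative assumptions, not the paper's identified responses, and the deep-learning noise-reduction stage is not shown.

```python
import numpy as np

# Toy response-function glucose model with multi-input (BG, CHO, insulin).

def impulse_response(t_min: np.ndarray, peak: float, tau: float) -> np.ndarray:
    """Simple gamma-like response curve peaking around `tau` minutes."""
    return peak * (t_min / tau) * np.exp(1.0 - t_min / tau)

def predict_glucose(bg_now, cho_grams, insulin_units, horizon_min=240, dt=5):
    """Predict BG (mg/dL) over `horizon_min` from per-step CHO and insulin inputs."""
    t = np.arange(0, horizon_min, dt, dtype=float)
    h_cho = impulse_response(t, peak=3.0, tau=60.0)      # mg/dL per gram (assumed)
    h_ins = impulse_response(t, peak=30.0, tau=90.0)     # mg/dL per unit (assumed)
    rise = np.convolve(cho_grams, h_cho)[:len(t)]
    drop = np.convolve(insulin_units, h_ins)[:len(t)]
    return bg_now + rise - drop

steps = 240 // 5
cho = np.zeros(steps); cho[0] = 45.0                     # 45 g meal at t = 0
ins = np.zeros(steps); ins[1] = 4.0                      # 4 U bolus 5 min later
bg = predict_glucose(120.0, cho, ins)
print(f"peak {bg.max():.0f} mg/dL at t = {bg.argmax() * 5} min, end {bg[-1]:.0f} mg/dL")
```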
{"title":"Long-term blood glucose prediction using deep learning-based noise reduction","authors":"Su-Jin Kim ,&nbsp;Jun Sung Moon ,&nbsp;Sung-Yoon Jung","doi":"10.1016/j.icte.2025.05.009","DOIUrl":"10.1016/j.icte.2025.05.009","url":null,"abstract":"<div><div>The Artificial Pancreas System (APS) is a device designed to monitor blood glucose levels in real-time and automatically regulate insulin for diabetes patients. Blood glucose prediction plays a crucial role in these systems by enabling proactive responses to glucose variations, thereby preventing risks such as hypoglycemia or hyperglycemia and assisting patients in managing their condition effectively. However, Continuous Glucose Monitoring (CGM) sensor data often contain significant sensor noise. Without effectively reducing the sensor noise, prediction accuracy can be severely compromised. Therefore, we first present a deep learning (DL) method for noise reduction in CGM data and, second, propose a long-term blood glucose prediction approach based on the system response function, utilizing a multi-input(e.g., blood glucose, carbohydrate (CHO) intake, and insulin). In this study, simglucose, based on the UVA-PADOVA simulator, was utilized to test and evaluate the proposed methods. As a result, we found that noise reduction using deep learning (DL) was significantly more effective than conventional filtering methods. Furthermore, the proposed long-term blood glucose prediction approach reliably tracked blood glucose fluctuations in custom scenarios and accurately predicted daily glucose patterns. Even in random scenarios, the proposed model accurately captured blood glucose trends, closely aligning with actual BG values and demonstrating remarkable performance.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 715-720"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0