
ICT Express: Latest Publications

EDAS: Effective Data Augmentation Strategies for test-time adaptation
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-08-06 | DOI: 10.1016/j.icte.2025.07.011
Mansoo Jung , Sunbeom Jeong , Youngwook Kim , Jungwoo Lee
Test-time adaptation (TTA) is a method of updating model parameters during inference using only unlabeled test data. Unlike in supervised learning, where labels are provided, data augmentation may not function effectively in TTA settings due to discrepancies between predictions on original and augmented samples. We address this limitation by introducing a novel approach that employs selected augmentations with distinct adaptation strategies customized for each transformation. Our approach is designed as a plug-in solution that can easily be integrated into existing methods. Extensive experiments demonstrate that our approach outperforms existing baselines on the ImageNet-C, VisDA2021, and ImageNet-Sketch datasets under various challenging scenarios.
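The abstract's core observation, that predictions on original and augmented samples can disagree, suggests selecting augmentations by prediction consistency. Below is a minimal sketch of that generic idea only; it is not EDAS's actual per-augmentation strategies, which the abstract does not detail, and all names, logits, and the threshold are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def select_consistent_augmentations(p_orig, p_augs, threshold=0.1):
    """Keep augmentations whose predictive distribution stays close (L1
    distance) to the prediction on the original sample; discrepant
    augmentations are discarded rather than used for adaptation."""
    kept = []
    for i, p_aug in enumerate(p_augs):
        if np.abs(p_orig - p_aug).sum() < threshold:
            kept.append(i)
    return kept

logits_orig = np.array([2.0, 0.1, -1.0])
p_orig = softmax(logits_orig)
p_augs = [softmax(np.array([1.9, 0.2, -0.9])),   # mild augmentation: consistent
          softmax(np.array([-1.0, 2.0, 0.5]))]   # heavy augmentation: prediction flips
print(select_consistent_augmentations(p_orig, p_augs))  # [0]
```

In a full TTA loop, only the surviving augmentations would contribute to the unsupervised update (e.g., entropy minimization), which is the failure mode the abstract says naive augmentation runs into.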
Citations: 0
Deep learning-based diabetic retinopathy recognition and grading: Challenges, gaps, and an improved approach — A survey
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-08-09 | DOI: 10.1016/j.icte.2025.08.001
Md Ilias Bappi , Jannat Afrin Juthy , Kyungbaek Kim
Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness worldwide. Early diagnosis is crucial for preventing irreversible vision loss, but manual screening methods are time-consuming and often inconsistent. Deep learning (DL) techniques have shown promise in automating DR detection; however, many existing models still struggle to capture subtle lesions and distinguish fine-grained severity stages. In this survey, we comprehensively review recent DL-based approaches for DR classification, emphasizing attention mechanisms, feature fusion strategies, and stage-wise grading. To address current gaps, we propose a hybrid taxonomy that identifies effective combinations such as texture-based attention, CNN-Transformer fusion, and multi-modal integration. Additionally, we validate our previously published model, STMFNet, a spatial texture-aware attention network based on EfficientNet, across four benchmark datasets. On EyePACS and Messidor, STMFNet achieves up to 98.10% accuracy, outperforming several state-of-the-art (SOTA) models under similar settings. This study provides both a consolidated overview of DR detection advancements and a practical benchmark framework to guide future research in AI-assisted DR classification.
Citations: 0
A lightweight remote sensing image fusion method for vehicle perception
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-06-29 | DOI: 10.1016/j.icte.2025.06.012
Yangyang Zhao , Jiannan Su , Wenjun Li , Zhiyong Yu , Xiaowei Dai
Remote sensing image fusion plays a crucial role in enhancing image information. However, the limitations of existing fusion technologies in terms of computational resources and storage capacity make real-time processing difficult. Therefore, a lightweight fusion method based on knowledge distillation is proposed for vehicle remote sensing image fusion. Knowledge distillation is used to transfer knowledge from a complex teacher model to a lightweight student model, achieving a significant reduction in model complexity while maintaining high fusion accuracy. Experimental results show that the proposed method performs well on the DroneVehicle dataset, with a model size of only 0.641M.
© 2025 The Korean Institute of Communications and Information Sciences. Publishing services by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
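The teacher-to-student transfer the abstract describes can be illustrated with the standard response-based knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution. This is a hedged sketch with illustrative logits and temperature; the paper's actual fusion architecture and training setup are not reproduced.

```python
import numpy as np

def softened(logits, T):
    """Temperature-softened softmax: higher T exposes the teacher's 'dark
    knowledge' about relative class similarities."""
    z = logits / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on softened distributions, scaled by T^2 as in
    the classic Hinton-style formulation."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)))

teacher = np.array([5.0, 1.0, -2.0])
student_good = np.array([4.5, 1.2, -1.8])   # roughly agrees with the teacher
student_bad = np.array([-2.0, 5.0, 1.0])    # disagrees with the teacher
print(kd_loss(teacher, student_good) < kd_loss(teacher, student_bad))  # True
```

In practice this term is combined with a task loss (here, a fusion-quality loss) so the lightweight student both imitates the teacher and fits the data.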
Citations: 0
Neural-NGBoost: Natural gradient boosting with neural network base learners
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-08-19 | DOI: 10.1016/j.icte.2025.08.003
Jamshidjon Ganiev , Deok-Woong Kim , Seung-Hwan Bae
NGBoost has shown promising results in probabilistic and point estimation tasks. However, it remains unclear whether the method scales to neural architectures, since its base learner is a decision tree. To resolve this, we design a Neural-NGBoost framework by replacing the base learner with lightweight neural networks and introducing joint gradient estimation for the boosting procedure. Based on natural gradient boosting, we iteratively update the neural base learner by inferring the natural gradient and update the parameter score with its probabilistic distribution. Experimental results show that Neural-NGBoost achieves superior performance across various datasets compared to other boosting methods.
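As a structural illustration of swapping tree base learners for lightweight neural networks, the sketch below runs plain squared-error gradient boosting where each stage fits the residual with an ELM-style one-hidden-layer net (random tanh features plus a closed-form ridge readout). This is an assumption-laden simplification: NGBoost proper boosts the parameters of a predictive distribution with natural gradients, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyNet:
    """One-hidden-layer net: fixed random tanh features, ridge-solved output
    layer. A stand-in for the 'lightweight neural network' base learner."""
    def __init__(self, n_in, n_hidden=32):
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def fit(self, X, y, lam=1e-2):
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def boost(X, y, n_stages=20, lr=0.3):
    """Each stage fits the current residual, i.e. the negative gradient of the
    squared-error loss, then is added with a shrinkage factor."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_stages):
        residual = y - pred
        pred += lr * TinyNet(X.shape[1]).fit(X, residual).predict(X)
    return pred

X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
fit = boost(X, y)
print(np.mean((fit - y) ** 2) < np.var(y))  # True: boosting reduces error
```

The framework in the paper replaces the residual above with a natural-gradient direction on distribution parameters; the staged additive structure is the shared skeleton.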
Citations: 0
Deep Q-learning intrusion detection system (DQ-IDS): A novel reinforcement learning approach for adaptive and self-learning cybersecurity
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-05-18 | DOI: 10.1016/j.icte.2025.05.007
Md. Alamgir Hossain
With the increasing sophistication of cyber threats, traditional Intrusion Detection Systems (IDS) often fail to adapt to evolving attack patterns, leading to high false positive rates and inadequate detection of zero-day attacks. This study proposes the Deep Q-Learning Intrusion Detection System (DQ-IDS), a novel reinforcement learning (RL)-based approach designed to dynamically learn network attack behaviors and continuously enhance detection performance. Unlike conventional machine learning (ML) and deep learning (DL)-based IDS models that depend on static, pre-trained classifiers, DQ-IDS employs Deep Q-Networks (DQN) with experience replay and adaptive ε-greedy exploration to autonomously classify benign and malicious network traffic. The integration of experience replay mitigates catastrophic forgetting, while adaptive exploration ensures an optimal trade-off between learning efficiency and threat detection. A reward-driven training mechanism reinforces correct classifications and penalizes errors, thereby reducing both false positive and false negative rates. Extensive empirical evaluations on real-world network datasets demonstrate that DQ-IDS achieves a detection accuracy of 97.18%, significantly outperforming conventional IDS solutions in both attack detection and computational efficiency. This work introduces a paradigm shift toward adaptive, self-learning cybersecurity systems capable of real-time, robust threat mitigation in dynamic network environments.
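The three ingredients the abstract names (reward-driven updates, an experience-replay buffer, and decaying ε-greedy exploration) can be shown in a toy tabular Q-learner that labels discretized "traffic states" as benign or malicious. Everything here, states, rewards, and hyperparameters, is made up for illustration; it is not the paper's DQ-IDS, which uses Deep Q-Networks rather than a table.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 10, (0, 1)                      # action 0 = benign, 1 = malicious
true_label = {s: int(s >= 6) for s in range(N_STATES)}  # states 6-9 are attacks
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
replay, eps, alpha = [], 1.0, 0.1

for step in range(3000):
    s = random.randrange(N_STATES)
    # epsilon-greedy: explore early, exploit the learned Q-values later
    a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[(s, x)])
    r = 1.0 if a == true_label[s] else -1.0         # reward reinforces correct classification
    replay.append((s, a, r))
    # experience replay: learn from a random minibatch of past transitions
    for s_i, a_i, r_i in random.sample(replay, min(32, len(replay))):
        Q[(s_i, a_i)] += alpha * (r_i - Q[(s_i, a_i)])
    eps = max(0.05, eps * 0.999)                    # decaying exploration

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
print(policy == true_label)  # True: the learned policy flags the attack states
```

A DQN version replaces the Q table with a neural network over continuous flow features and adds a target network, but the replay and exploration mechanics are the same.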
Citations: 0
Artificial intelligence based prediction of refractive index profile of graded refractive index optical fiber
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-10-01 | Epub Date: 2025-06-04 | DOI: 10.1016/j.icte.2025.05.011
Seung-Yeol Lee , Hyuntai Kim
This research presents a deep neural network (DNN) approach for predicting the refractive index profile of graded-index multimode fibers (GRIN MMFs). The model was trained on simulated data and achieved an average loss of less than 1% across both selected (or structured) and random test sets. This artificial intelligence-driven approach has potential applications in custom fiber design, nonlinear optics, and rapid fiber performance characterization. Future developments may include the use of real-world data and the extension of the model to predict refractive index profiles, further enhancing its versatility.
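For context, the standard power-law graded-index profile below is the kind of target such a DNN would regress; the parameter values (core index, relative index difference, profile exponent) are illustrative assumptions, not the paper's.

```python
import numpy as np

def grin_profile(r, a=25e-6, n1=1.47, delta=0.01, alpha=2.0):
    """Power-law GRIN profile: n(r) = n1 * sqrt(1 - 2*delta*(r/a)**alpha)
    inside the core (|r| <= a), constant cladding index n1*sqrt(1-2*delta)
    outside. alpha = 2 gives the common near-parabolic profile."""
    n_clad = n1 * np.sqrt(1 - 2 * delta)
    inside = np.abs(r) <= a
    return np.where(inside,
                    n1 * np.sqrt(1 - 2 * delta * (np.abs(r) / a) ** alpha),
                    n_clad)

r = np.linspace(-30e-6, 30e-6, 7)   # radial positions across core and cladding
print(grin_profile(r))              # peaks at the axis, flat in the cladding
```

A profile-prediction model like the one described would map measured fiber responses to the parameters (a, n1, delta, alpha) or directly to sampled n(r) values such as these.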
Citations: 0
Multiple object detection and tracking in autonomous vehicles: A survey on enhanced affinity computation and its multimodal applications
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-01 | Epub Date: 2025-06-21 | DOI: 10.1016/j.icte.2025.06.005
Muhammad Adeel Altaf , Min Young Kim
Three-dimensional (3D) object tracking is crucial in computer vision applications, particularly in autonomous driving, robotics, and surveillance. Despite advancements, effectively utilizing multimodal data to improve multi-object detection and tracking (MODT) remains challenging. This study introduces ACMODT, an affinity computation-based multi-object detection and tracking framework that integrates camera (2D) and LiDAR (3D) data for enhanced MODT performance in autonomous driving. This approach leverages EPNet as a backbone, utilizing 2D–3D feature fusion for accurate proposal generation. A deep neural network (DNN) extracts robust appearance and geometric features, while an improved affinity computation module combines Refined Boost Correlation Features (RBCF) and 3D-Extended Geometric IoU (3D-XGIoU) for precise object association. Motion prediction is refined using a Kalman filter (KF), and Gaussian Mixture Model (GMM)-based data association ensures consistent tracking. Experiments on the KITTI car tracking benchmark for quantitative analysis and the RADIATE dataset for visualization demonstrate that our method achieves superior tracking accuracy and precision compared to state-of-the-art multi-object tracking (MOT) approaches, proving its effectiveness for real-time object tracking.
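The motion-prediction component mentioned in the abstract, a Kalman filter, can be sketched minimally as a 1-D constant-velocity tracker. The noise levels and measurements are illustrative assumptions, and the rest of the ACMODT pipeline (feature fusion, affinity computation, GMM association) is not reproduced.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: x += v * dt (dt = 1)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-3 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle of a linear Kalman filter."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [1.1, 1.9, 3.2, 4.0, 5.1]:      # noisy observations of roughly x = t
    x, P = kf_step(x, P, z)
print(x)  # position near 5, velocity near 1
```

In a tracker, the predicted state feeds the affinity computation: detections are matched to tracks whose predicted positions they best explain, then each matched track is updated as above.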
Citations: 0
Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-01 | Epub Date: 2025-06-28 | DOI: 10.1016/j.icte.2025.05.013
Md Shahriar Nazim , Arbil Chakma , Md. Ibne Joha, Syed Samiul Alam, Md Minhazur Rahman, Miftahul Khoir Shilahul Umam, Yeong Min Jang
Lithium-ion batteries are critical to electric vehicles (EVs) but degrade over time, requiring accurate State of Health (SOH) and Remaining Useful Life (RUL) estimation. This review examines recent AI-based methods, especially Convolutional and Recurrent Neural Networks, for their effectiveness in prediction. It discusses key optimization strategies such as feature selection, parameter tuning, and transfer learning. Public datasets (NASA, CALCE, Oxford) are evaluated for benchmarking. The paper also assesses model complexity, performance metrics, and deployment challenges. Finally, it outlines future directions for improving battery management systems, supporting more efficient, reliable, and scalable integration into real-world EV applications.
Citations: 0
Enhancing data harvesting systems: Performance quantification of Cloud–Edge-sensor networks using queueing theory
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-01 | Epub Date: 2025-05-22 | DOI: 10.1016/j.icte.2025.04.017
Jose Wanderlei Rocha , Eder Gomes , Vandirleya Barbosa , Arthur Sabino , Luiz Nelson Lima , Gustavo Callou , Francisco Airton Silva , Eunmi Choi , Tuan Anh Nguyen , Dugki Min , Jae-Woo Lee
This study investigates a Cloud–Edge-sensor infrastructure using M/M/c/K queueing theory to analyze the performance of agricultural data systems. It focuses on optimizing data handling and evaluates the impact of system configurations on performance. The model significantly enhances efficiency and scalability, minimizing the need for extensive physical infrastructure. Analysis shows over 90% utilization in both layers, highlighting the model's applicability to various IoT applications. The M/M/c/K queueing model addresses scalability and real-time data processing challenges in agricultural cloud–edge-sensor networks, improving over traditional methods that lack dynamic scalability. Designed for optimized resource use and reduced data handling delays, this model proves crucial in precision agriculture, where timely data is essential for decision-making. Its versatility extends to various agricultural applications requiring efficient real-time analysis and resource management.
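The M/M/c/K model the study applies has closed-form steady-state probabilities, from which server utilization and the blocking probability follow directly. A worked sketch with illustrative arrival and service rates (not values from the paper):

```python
import math

def mmck(lam, mu, c, K):
    """Steady-state metrics of an M/M/c/K queue: arrival rate lam, service
    rate mu per server, c servers, total capacity K (in service + waiting)."""
    a = lam / mu
    # unnormalized state probabilities p_n for n customers in the system
    p = [a ** n / math.factorial(n) if n <= c
         else a ** n / (math.factorial(c) * c ** (n - c))
         for n in range(K + 1)]
    norm = sum(p)
    p = [x / norm for x in p]
    blocking = p[K]                                    # arrivals lost when full
    mean_busy = sum(min(n, c) * p[n] for n in range(K + 1))
    utilization = mean_busy / c
    return blocking, utilization

b, u = mmck(lam=8.0, mu=1.0, c=4, K=12)   # offered load twice the capacity
print(round(b, 4), round(u, 4))
```

A useful sanity check on any such implementation is flow balance, lam * (1 - P_K) = mu * E[busy servers]: accepted arrivals must equal completed services in steady state.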
Citations: 0
The journey to cloud as a continuum: Opportunities, challenges, and research directions
IF 4.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2025-08-01 | Epub Date: 2025-05-08 | DOI: 10.1016/j.icte.2025.04.015
Md. Mahmodul Hasan , Tangina Sultana , Md. Delowar Hossain , Ashis Kumar Mandal , Thien-Thu Ngo , Ga-Won Lee , Eui-Nam Huh
The rapid development of the Internet of Things (IoT) has driven a significant shift in computing architectures, leading to the rise of the cloud continuum—a flexible framework that combines cloud services with edge and fog computing. While existing survey papers have contributed valuable insights, they often focus narrowly on specific aspects of the continuum or do not fully address its evolving complexities. These limitations underscore the need for a comprehensive and up-to-date analysis of the field. This study bridges these gaps by presenting an extensive review of the cloud continuum, covering its role in enhancing resource management, improving real-time data processing, integrating machine learning approaches, and optimizing user experiences across diverse applications. We examine how edge devices, fog nodes, and cloud infrastructures synergize to enable decentralized data processing, reducing latency in critical areas such as smart cities, healthcare, and autonomous vehicles. Additionally, this study explores the integration of machine learning across edge, fog, and cloud layers, with a focus on inference and distributed learning methods. By highlighting how these technologies enhance efficiency, scalability, and intelligent decision-making, this review provides a holistic perspective on the cloud continuum. Our analysis offers valuable insights into future research directions, emphasizing innovations that can drive next-generation computing systems toward greater efficiency and adaptability.
{"title":"The journey to cloud as a continuum: Opportunities, challenges, and research directions","authors":"Md. Mahmodul Hasan ,&nbsp;Tangina Sultana ,&nbsp;Md. Delowar Hossain ,&nbsp;Ashis Kumar Mandal ,&nbsp;Thien-Thu Ngo ,&nbsp;Ga-Won Lee ,&nbsp;Eui-Nam Huh","doi":"10.1016/j.icte.2025.04.015","DOIUrl":"10.1016/j.icte.2025.04.015","url":null,"abstract":"<div><div>The rapid development of the Internet of Things (IoT) has driven a significant shift in computing architectures, leading to the rise of the cloud continuum—a flexible framework that combines cloud services with edge and fog computing. While existing survey papers have contributed valuable insights, they often focus narrowly on specific aspects of the continuum or do not fully address its evolving complexities. These limitations underscore the need for a comprehensive and up-to-date analysis of the field. This study bridges these gaps by presenting an extensive review of the cloud continuum, covering its role in enhancing resource management, improving real-time data processing, integrating machine learning approaches, and optimizing user experiences across diverse applications. We examine how edge devices, fog nodes, and cloud infrastructures synergize to enable decentralized data processing, reducing latency in critical areas such as smart cities, healthcare, and autonomous vehicles. Additionally, this study explores the integration of machine learning across edge, fog, and cloud layers, with a focus on inference and distributed learning methods. By highlighting how these technologies enhance efficiency, scalability, and intelligent decision-making, this review provides a holistic perspective on the cloud continuum. 
Our analysis offers valuable insights into future research directions, emphasizing innovations that can drive next-generation computing systems toward greater efficiency and adaptability.</div></div>","PeriodicalId":48526,"journal":{"name":"ICT Express","volume":"11 4","pages":"Pages 666-689"},"PeriodicalIF":4.2,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144840806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0