
Latest Publications in IEEE Access

Lightweight CNN-Based Intrusion Detection for CAN Bus Networks
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-15 | DOI: 10.1109/ACCESS.2026.3654521
Thi-Thu-Huong Le;Andro Aprila Adiputra;Anak Agung Ngurah Dharmawangsa;Hyunjin Jang;Howon Kim
The Controller Area Network (CAN) bus plays a key role in keeping vehicles safe by enabling critical systems to communicate with each other. However, because it lacks built-in security features, the CAN bus is open to cyber threats, making a CAN bus intrusion detection system (IDS) critical for automotive cybersecurity. This makes it especially important to create IDSs that are not just accurate but also efficient enough to run on the limited hardware of Electronic Control Units (ECUs). Unfortunately, many current deep learning solutions for CAN intrusion detection rely on large, complex models whose computational demands are unsuitable for resource-constrained ECUs. We propose TinyCNNCANNet, an ultra-lightweight convolutional neural network with just 13K parameters, designed to provide low-latency and resource-efficient CAN intrusion detection under experimental settings. Rather than focusing on on-vehicle deployment, this work evaluates the feasibility of lightweight CNN architectures for future real-time capable CAN intrusion detection. We comprehensively evaluate TinyCNNCANNet on four diverse datasets: CANFD 2021, CICIoV 2024, Multi-Fuzzer-CAN 2025, and SynCAN 2025. These datasets encompass nine attack types. TinyCNNCANNet achieves competitive or superior performance compared to models with 115-$300\times$ more parameters. All architectures detect volume-based attacks (DoS, flooding, and fuzzing) most effectively. Sophisticated attacks (malfunction and fuzzer variants) challenge all models to a similar degree, regardless of complexity. TinyCNNCANNet shows superior generalization on synthetic out-of-distribution data (SynCAN 2025): it achieves 100% accuracy, while EfficientCANNet (86.82%) and MobileNetCANNet (59.33%) fail, revealing overfitting vulnerabilities in complex models. TinyCNNCANNet delivers 12-$20\times$ faster inference (0.16-0.51 ms vs. 2.14-4.15 ms) and a 145-$383\times$ smaller model size (0.04 MB vs. 5.81-15.32 MB). These results demonstrate the potential of TinyCNNCANNet for real-time capable CAN intrusion detection and indicate its suitability for future deployment on embedded automotive platforms.
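The abstract does not specify the network layout, so the following is a minimal PyTorch sketch of the general idea only: a tiny 1D CNN over a fixed-length window of normalized CAN payload bytes, sized to stay in the roughly 10K-parameter range. The window length, layer widths, and ten-class output are assumptions, not the authors' TinyCNNCANNet architecture.

```python
# Minimal sketch (not the authors' architecture): a tiny 1D CNN intrusion
# detector over a fixed-length window of normalized CAN payload bytes.
# Layer widths, the 64-byte window, and the 10-class output are assumptions.
import torch
import torch.nn as nn

class TinyCANClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 48, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # global average pooling
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(48, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyCANClassifier()
n_params = sum(p.numel() for p in model.parameters())
dummy = torch.rand(8, 1, 64)                         # batch of 8 byte windows in [0, 1]
print(model(dummy).shape, f"{n_params} trainable parameters")  # ~11K parameters
```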
Citations: 0
Revisiting Clique and Star Expansions in Hypergraph Representation Learning: Observations, Problems, and Solutions
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-15 | DOI: 10.1109/ACCESS.2026.3654644
David Yoon Suk Kang;Eujeanne Kim;Kyungsik Han;Sang-Wook Kim
Hypergraph representation learning has gained increasing attention for modeling higher-order relationships beyond pairwise interactions. Among existing approaches, clique expansion-based (CE-based) and star expansion-based (SE-based) methods are two dominant paradigms, yet their fundamental limitations remain underexplored. In this paper, we analyze CE- and SE-based methods and identify two complementary issues: CE-based methods suffer from over-agglomeration, where node representations in overlapping hyperedges become excessively clustered, while SE-based methods exhibit under-agglomeration, failing to sufficiently aggregate nodes within the same hyperedge. To address these issues, we propose $\textsf{STARGCN}$, a hypergraph representation learning framework that constructs a bipartite graph via star expansion and employs a graph convolutional network with a tuplewise loss to explicitly enforce appropriate aggregation and separation of node representations. Experiments on seven real-world hypergraph datasets demonstrate that $\textsf{STARGCN}$ consistently and significantly outperforms five state-of-the-art CE- and SE-based methods across all datasets, achieving performance gains of up to 13.2% in accuracy and 10.2% in F1-score over the strongest baseline.
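As background for the construction the abstract builds on, the sketch below shows star expansion in isolation: each hyperedge becomes an auxiliary vertex joined to its member nodes, yielding a bipartite graph on which a standard GCN can pass messages. The toy hypergraph and the symmetric normalization are illustrative and are not the authors' $\textsf{STARGCN}$ code.

```python
# Minimal sketch of star expansion: each hyperedge e becomes an auxiliary
# vertex connected to every node it contains, giving a bipartite graph
# (original nodes on one side, hyperedge-vertices on the other) suitable
# for GCN message passing. The toy hypergraph below is illustrative only.
import numpy as np

def star_expansion(num_nodes: int, hyperedges: list[set[int]]) -> np.ndarray:
    """Return the adjacency matrix of the bipartite star-expansion graph.

    Vertices 0..num_nodes-1 are the original nodes; vertex num_nodes + j
    represents hyperedge j.
    """
    m = len(hyperedges)
    adj = np.zeros((num_nodes + m, num_nodes + m), dtype=np.float32)
    for j, edge in enumerate(hyperedges):
        for v in edge:
            adj[v, num_nodes + j] = 1.0
            adj[num_nodes + j, v] = 1.0
    return adj

# Toy hypergraph: 5 nodes, 3 overlapping hyperedges.
A = star_expansion(5, [{0, 1, 2}, {1, 2, 3}, {3, 4}])
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))   # symmetric D^{-1/2} A D^{-1/2} normalization
print(A.shape)
print(A_norm.round(2))
```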
Citations: 0
Training-Free Proxy-Guided Bayesian NAS for UAV-Constrained TinyML
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-14 | DOI: 10.1109/ACCESS.2026.3654275
Parthiva Yadlapalli;Rishi Raj;Dayananda Pruthviraja
Neural Architecture Search (NAS) has emerged as a powerful paradigm for automating model design, yet most existing approaches remain training-intensive and computationally prohibitive. In resource-constrained domains such as UAV-based perception and Tiny Machine Learning (TinyML), performing repeated training or fine-tuning during search is infeasible due to strict compute, memory, and energy limitations. We propose a Proxy-Guided Bayesian Optimization NAS framework that eliminates all training during search by modeling a fused set of trainability proxies (e.g., SynFlow, Jacobian covariance, Neural Tangent Kernel) and hardware proxies (e.g., FLOPs, parameters, latency) within a unified Bayesian surrogate. This surrogate enables uncertainty-aware exploration directly under device-level constraints, guiding the search toward architectures that are both efficient and deployable. Unlike conventional NAS pipelines that demand extensive GPU-time for accuracy evaluations, our method completes the entire search on NATS-Bench (TSS) in only ~0.8 GPU-hours, achieving a top-1 accuracy of 93.25% with 2.10M parameters, 110M FLOPs, and 0.80 ms latency. This corresponds to an order-of-magnitude reduction in search cost compared to accuracy-driven baselines such as REA and BOHB, while preserving accuracy and satisfying all TinyML deployment budgets ($P_{\max}$, $F_{\max}$, $L_{\max}$). By coupling hardware-awareness with training-free optimization, the proposed approach bridges the gap between proxy-based NAS and real-world, energy-efficient deployment for UAV and edge intelligence applications.
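As a conceptual illustration of the search pattern described above, the sketch below discards candidates that violate a hardware budget, scores the rest with a fused proxy, and lets a Gaussian-process surrogate with a UCB acquisition pick the next candidate. The two-dimensional "architecture" encoding, the synthetic proxy function, and the budget check are stand-ins for the paper's zero-cost proxies and FLOPs/latency measurements.

```python
# Conceptual sketch of training-free, proxy-guided Bayesian search.
# The "architecture" here is just a 2-D knob vector, and the trainability
# proxy is a synthetic function; in the paper these would be fused
# zero-cost proxies (SynFlow, NTK, ...) and measured FLOPs/latency budgets.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def hardware_ok(x):              # stand-in budget check (params / FLOPs / latency)
    return x.sum() <= 1.5

def fused_proxy(x):              # stand-in for a fused trainability score
    return -((x[0] - 0.6) ** 2 + (x[1] - 0.4) ** 2) + 0.01 * rng.normal()

# Seed the surrogate with a few feasible random candidates.
X = [x for x in rng.random((20, 2)) if hardware_ok(x)][:5]
y = [fused_proxy(x) for x in X]

gp = GaussianProcessRegressor()
for _ in range(15):                                   # training-free search loop
    pool = np.array([x for x in rng.random((256, 2)) if hardware_ok(x)])
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(pool, return_std=True)
    best = pool[np.argmax(mu + 1.0 * sigma)]          # UCB acquisition
    X.append(best)
    y.append(fused_proxy(best))

print("best candidate:", X[int(np.argmax(y))], "proxy score:", max(y))
```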
Citations: 0
Sensor-Free Occupancy Forecasting for Smart Buildings: A Wi-Fi Syslog Approach With Machine and Deep Learning
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-13 | DOI: 10.1109/ACCESS.2026.3654007
Shadi Banitaan;Taher El Taher;Khalid Aldamasi;Hassan Hassoun;Shoaib Ahmed
Accurate short-term occupancy forecasting is essential for smart building operations such as energy management, space utilization, safety, and facility planning. However, many existing solutions rely on dedicated sensors that increase deployment cost and operational complexity and limit scalability. This paper proposes a sensor-free occupancy forecasting framework that utilizes Wi-Fi syslog data already generated by enterprise networks. The study uses two real-world datasets derived from campus and office building Wi-Fi infrastructures and evaluates several machine learning models, including Random Forest, Decision Tree, Gradient Boosting, and a Long Short-Term Memory (LSTM) network, for multi-step forecasting at a 5-minute resolution. Experimental results show that Random Forest achieves the highest accuracy, with Coefficient of Determination ( $R^{2}$ ) values of up to 0.997 and consistently low mean absolute error (MAE) and root mean squared error (RMSE), while LSTM provides competitive performance for short and medium forecasting horizons. Extended horizon experiments show that LSTM-based forecasts stay reliable for look-ahead periods of up to 60 minutes, while longer horizons show increased sensitivity to temporal variability and pattern changes. We also show that using only a small number of features is adequate to achieve high prediction accuracy, which simplifies data preparation and supports real-time deployment. The evaluation also examines cross-zone and cross-building generalization and demonstrates that short-term adaptation enables robust deployment across heterogeneous environments with limited retraining overhead. The proposed framework is integrated into an interactive dashboard to support visualization and decision-making. Overall, the results indicate that Wi-Fi syslog-based occupancy forecasting is a practical, scalable, and privacy-preserving approach for smart building management.
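To make the forecasting setup concrete, the sketch below turns a 5-minute device-count series (the kind of signal one would derive from Wi-Fi syslog association events) into lag features and fits a multi-output Random Forest that predicts the next six steps. The synthetic daily cycle, the 12-lag window, and the 30-minute horizon are assumptions for the example, not the paper's exact configuration.

```python
# Minimal sketch: turn a 5-minute device-count series (as would be derived
# from Wi-Fi syslog association events) into lag features and fit a
# multi-output Random Forest predicting the next 6 steps (30 minutes).
# The synthetic series and the 12-lag window are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
t = np.arange(2000)
# 288 five-minute intervals per day -> one daily occupancy cycle plus noise.
counts = 50 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

lags, horizon = 12, 6
X = np.stack([counts[i - lags:i] for i in range(lags, counts.size - horizon)])
Y = np.stack([counts[i:i + horizon] for i in range(lags, counts.size - horizon)])

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], Y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - Y[split:]))
print(f"test MAE over a 30-minute horizon: {mae:.2f} devices")
```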
Citations: 0
Structured and Unstructured Speech2Action Frameworks for Human–Robot Collaboration: A User Study
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-13 | DOI: 10.1109/ACCESS.2026.3653715
Krishna Kodur;Manizheh Zand;Matthew Tognotti;Cinthya Járegui;Maria Kyrarini
Practical and intuitive communication remains a critical challenge in Human-Robot Collaboration, particularly within domestic environments. Conventional systems typically rely on structured (scripted) speech inputs, which may limit natural interaction and accessibility. This study evaluates user preferences and system usability between structured and unstructured (conversational) speech modalities in a collaborative cooking scenario using a mobile manipulator robot. Thirty adult participants engaged in tasks involving both communication modes, during which the frequency and impact of robot execution errors were also assessed. The proposed Speech2Action framework integrates Google Cloud Speech-to-Text, BERT, and GPT-Neo models for intent recognition and command generation, combined with ROS-based motion control for object retrieval. Usability and perception were analyzed using System Usability Scale (SUS) and Human–Robot Collaboration Questionnaire (HRCQ) metrics through paired t-tests and correlation analyses. Results show a preference for unstructured speech (p = 0.0032) with higher SUS scores, while robot execution errors affected perceived safety but not overall usability, consistent with the Pratfall Effect. The findings inform the design of natural, robust, and user-centric speech interfaces for collaborative robots.
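The authors' pipeline couples Google Cloud Speech-to-Text with BERT and GPT-Neo; the sketch below covers only the structured (scripted) mode as a point of contrast: a small hand-written grammar maps a transcribed utterance to an (action, object) command and falls back to the conversational pipeline when nothing matches. The verbs and objects are illustrative.

```python
# Minimal sketch of the *structured* speech mode only: a small grammar that
# maps a transcribed utterance to an (action, object) command. The real
# framework uses Google Speech-to-Text plus BERT/GPT-Neo for the
# unstructured mode; the verbs and objects below are illustrative.
import re

ACTIONS = {"pick up": "PICK", "bring": "BRING", "put down": "PLACE"}
OBJECTS = {"spoon", "bowl", "cup", "salt"}

def parse_command(utterance: str):
    text = utterance.lower().strip()
    for phrase, action in ACTIONS.items():
        m = re.match(rf"{phrase}\s+(?:the\s+)?(\w+)", text)
        if m and m.group(1) in OBJECTS:
            return {"action": action, "object": m.group(1)}
    return None  # fall back to the unstructured / conversational pipeline

print(parse_command("Pick up the spoon"))             # {'action': 'PICK', 'object': 'spoon'}
print(parse_command("Could you hand me something?"))  # None -> conversational path
```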
Citations: 0
Constructing Identity-Based Revocation Schemes for Efficient Generation of Ciphertexts
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651866
Jung Yeon Hwang;Jong Hwan Park
Asymmetric broadcast encryption (ABE) allows a sender, given the public keys or identities of recipients, to encrypt a message such that only an authorized subset of users can decrypt it. In fully asymmetric settings, where any user may act as a sender, ciphertext generation time and ciphertext size become critical performance metrics. However, most existing ABE schemes impose substantial sender-side computational costs and scale poorly with system size. This paper presents new ABE constructions that achieve fast ciphertext generation while maintaining compact ciphertexts. Our schemes are built upon the identity-based revocation (IBR) framework, enabling each user’s identity to serve directly as a public key. We first propose a basic IBR scheme that produces constant-size ciphertexts independent of the number of recipients or revoked users, achieving efficient encryption through optimized hash-to-point and aggregation techniques. We then extend this design to a tree-based construction that supports large-scale systems and offers a practical trade-off among encryption cost, decryption efficiency, and secret-key size. Both schemes are proven CPA-secure under a modified Decisional Bilinear Diffie–Hellman (mDBDH) assumption in the random-oracle model. Extensive experiments with concrete parameters demonstrate that our schemes significantly outperform existing asymmetric revocation approaches. For a system with $10^{6}$ users and a revocation rate of 1.5–3%, prior schemes require tens of seconds to generate a ciphertext, whereas our constructions complete encryption within 1.6 seconds while keeping the ciphertext size nearly constant (below $10^{2}$  KB).
Citations: 0
A Fixed Switching Frequency Model Predictive Control for Half-Bridge Inverter
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3653467
Xizi Chen;Shin Kawai;Triet Nguyen-van
Finite control set model predictive control has garnered significant attention in inverter control due to its compatibility with the discrete nature of power electronic systems. However, a major limitation of conventional finite control set model predictive control is its inherently variable switching frequency, which fluctuates with operating conditions and system parameters. This variability poses challenges for inverter performance, complicates filter design, and increases harmonic distortion. To address these issues, this paper proposes a fixed switching frequency model predictive control method built on an adaptive bandwidth control scheme that enables fixed-switching-frequency operation while preserving the core advantages of finite control set model predictive control. The proposed approach derives a dynamic relationship between the switching frequency and system voltage, allowing the controller to maintain a desired frequency without modifying the cost function. Simulation studies on a single-phase half-bridge inverter demonstrate that the method effectively stabilizes the switching frequency, maintains robustness against parameter variations, and achieves reliable tracking performance.
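For context, the sketch below implements only the conventional finite-control-set MPC loop for a half-bridge inverter feeding an RL load: at each sampling instant both admissible output voltages are evaluated against a one-step current prediction, and the one minimizing the tracking cost is applied. The circuit parameters and the 50 Hz, 10 A reference are illustrative, and the paper's adaptive-bandwidth mechanism for fixing the switching frequency is not reproduced.

```python
# Minimal sketch of the conventional FCS-MPC core for a half-bridge
# inverter feeding an RL load. The paper's contribution (fixing the
# switching frequency via an adaptive bandwidth) is not reproduced here;
# circuit parameters and the 50 Hz reference are illustrative.
import numpy as np

Vdc, R, L, Ts = 400.0, 2.0, 10e-3, 50e-6        # DC link, load R/L, sampling period
levels = (+Vdc / 2, -Vdc / 2)                   # the two half-bridge output voltages

def predict_current(i_k, v_k):
    """One-step Euler discretization of di/dt = (v - R*i)/L."""
    return i_k + (Ts / L) * (v_k - R * i_k)

i, i_log, sw_log = 0.0, [], []
t = np.arange(0, 0.04, Ts)                       # two 50 Hz fundamental periods
i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)

for k in range(t.size):
    # Evaluate the cost |i_ref - i_pred| for both switch states, apply the best.
    costs = [abs(i_ref[k] - predict_current(i, v)) for v in levels]
    v_applied = levels[int(np.argmin(costs))]
    i = predict_current(i, v_applied)            # plant update (ideal model)
    i_log.append(i)
    sw_log.append(v_applied > 0)

rmse = np.sqrt(np.mean((np.array(i_log) - i_ref) ** 2))
transitions = np.count_nonzero(np.diff(np.array(sw_log, dtype=int)) != 0)
print(f"tracking RMSE: {rmse:.3f} A")
print(f"average switching frequency: ~{transitions / (2 * t[-1]):.0f} Hz")
```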
Citations: 0
HG-MRLS: A Hierarchical Graph Meta-Reinforcement Learning Framework for Dynamic Scheduling in Heterogeneous Computing Networks
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651796
Lin Jiaxin;Huang Yuetian;Li Ruiqi;Qin Qiang;Huang Hanye
In heterogeneous computing networks formed by modern cloud-edge collaboration, dynamic resource scheduling has become a key bottleneck for system performance. Traditional rule-based methods falter when operating conditions shift, while most current deep reinforcement learning schedulers use single-level decision architectures and struggle to balance global optimization with local real-time responses. To address this core issue, this study proposes a novel hierarchical graph meta-reinforcement learning scheduler (HG-MRLS). The framework employs a dual-layer decision mechanism that decomposes complex scheduling into macro strategic planning and micro tactical execution. At the upper layer, a graph attention network-based macro scheduler learns forward-looking resource allocation by encoding the global topology and dynamic features. At the lower layer, a meta-reinforcement learning-based micro scheduler enables rapid self-adaptation in unknown local environments. To validate its effectiveness, we built a simulation platform driven by real workloads. Extensive experiments show that HG-MRLS outperforms classic heuristics and state-of-the-art deep RL methods on key metrics such as average job completion time, deadline satisfaction rate, and resource utilization. Under high loads and sudden dynamic events in particular, the framework exhibits outstanding stability and environmental adaptability. Its core contribution is demonstrating a new paradigm for resolving the inherent tension between global strategic planning and local real-time response in complex, dynamic systems, offering a scalable blueprint for multi-scale intelligent control.
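The sketch below illustrates only the dual-layer decision decomposition in its simplest form: a macro step selects a cluster from a global view of the network, and a micro step then selects a node inside that cluster. The hand-written scoring rules and the toy cluster inventory stand in for the paper's graph-attention macro policy and meta-RL micro policy.

```python
# Minimal sketch of the two-layer decision decomposition: a macro step
# selects a cluster from a global view, a micro step selects a node inside
# that cluster. The heuristic scores below stand in for the paper's
# GAT-based macro policy and meta-RL micro policy; the inventory is a toy.
clusters = {
    "edge-A": [{"id": "a1", "free_cpu": 4, "latency_ms": 5},
               {"id": "a2", "free_cpu": 1, "latency_ms": 4}],
    "edge-B": [{"id": "b1", "free_cpu": 2, "latency_ms": 8}],
    "cloud":  [{"id": "c1", "free_cpu": 32, "latency_ms": 40}],
}

def macro_select(job):
    """Macro layer: rank clusters by aggregate capacity vs. the job's latency need."""
    def score(nodes):
        capacity = sum(n["free_cpu"] for n in nodes)
        latency = min(n["latency_ms"] for n in nodes)
        return capacity - (5 * latency if job["latency_sensitive"] else 0)
    return max(clusters, key=lambda c: score(clusters[c]))

def micro_select(cluster, job):
    """Micro layer: inside the chosen cluster, pick the tightest-fitting feasible node."""
    feasible = [n for n in clusters[cluster] if n["free_cpu"] >= job["cpu"]]
    return min(feasible, key=lambda n: n["free_cpu"], default=None)

job = {"cpu": 2, "latency_sensitive": True}
chosen_cluster = macro_select(job)
print("macro ->", chosen_cluster, "| micro ->", micro_select(chosen_cluster, job))
```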
Citations: 0
Early Fault Detection and Diagnosis in Industrial Machines: An Approach Using Digital Twins
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651993
Hugo Leite;Dênis Leite;Diego Rativa
The emergence of Industry 4.0 has catalyzed the integration of advanced technologies to enhance manufacturing efficiency, reliability, and competitiveness. Fault Detection and Diagnosis (FDD) systems are critical for minimizing downtime and ensuring operational continuity. This research investigates the integration of Digital Twin (DT) technology with Machine Learning (ML) models for real-time fault detection and diagnosis (RT-FDD) in discrete manufacturing machines. Two industrial systems—a Pick-and-Place machine and a Furnace—were modeled using linear and non-linear models to develop Digital Twins. The method is validated in a simulated environment replicating real industrial behavior, combining DT-generated features with conventional real-time process data. The proposed approach improved F1 scores by up to 11% and demonstrated enhanced robustness in both inter-cycle and intra-cycle fault detection tasks. Notably, for the Furnace machine, the method enabled fault detection 40% earlier in the cycle while maintaining the same F1 Score of 94%, and provided reliable diagnosis with an F1 Score of 80% at only 15% of cycle completion. A comprehensive evaluation of 16 ML algorithms highlighted the effectiveness of DT features in boosting predictive performance.
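The sketch below shows the core feature-engineering idea in miniature: a digital twin (here just an ideal first-order step response) predicts the expected sensor trace for each cycle, and the residual between observation and prediction is appended to the raw signal before training a fault classifier. The process model, the injected gain-drop fault, and the Random Forest choice are illustrative stand-ins, not the paper's Pick-and-Place or Furnace models.

```python
# Minimal sketch of the core idea: a (here, first-order linear) digital twin
# predicts the expected sensor trace, and the residual between observation
# and twin prediction is added as a feature for a fault classifier. The
# process model, fault injection, and classifier choice are stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def twin_response(setpoint, n=50, tau=8.0):
    """Digital twin: ideal first-order step response toward the setpoint."""
    t = np.arange(n)
    return setpoint * (1 - np.exp(-t / tau))

def measured_cycle(setpoint, faulty):
    gain = 0.7 if faulty else 1.0                 # the fault shows up as a gain drop
    return gain * twin_response(setpoint) + rng.normal(0, 0.5, 50)

X, y = [], []
for _ in range(400):
    sp = rng.uniform(40, 80)
    fault = rng.random() < 0.3
    obs = measured_cycle(sp, fault)
    residual = obs - twin_response(sp)            # DT-generated feature
    X.append(np.concatenate([obs, residual]))     # raw signal + twin residual
    y.append(int(fault))

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.3f}")
```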
Citations: 0
IDChat: Toward a Universal, Multi-Biometric Digital Identity for Next-Generation Secure Communication in ASEAN
IF 3.6 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2025.3645071
Ben Rahman;Maryani;Thoyyibah T.
IDChat (Internet-Dependent Cryptographic Hybrid Authentication Technology) is a universal digital identity framework designed to eliminate costly and vulnerable SMS-based one-time passwords (OTPs). It fuses civil identity (NIK), multi-biometric modalities (fingerprint and retina), and genomic-derived entropy to reinforce cryptographic key generation under a unified fusion engine. Targeting Indonesia’s 120 million internet users—90% of whom still rely on paid SMS OTPs—IDChat introduces a privacy-preserving Digital Genetic Signature (DGS) generated via SHA-256 hashing and homomorphic encryption (BFV scheme), enhancing resistance to spoofing and brute-force attacks. Experimental validation using OpenCV with FVC2004 and CASIA-Iris datasets achieved 99.1% authentication accuracy, a false acceptance rate (FAR) of 0.008%, and an average latency of 1.9 seconds, demonstrating competitive efficiency against existing biometric systems. A comparative cost analysis indicates potential national savings of Rp 12–50 trillion annually by replacing SMS OTPs with free Wi-Fi–based verification. Unlike centralized frameworks such as Aadhaar or FIDO2, IDChat performs local (offline) verification within closed Wi-Fi environments through cached encrypted templates, ensuring independence from cellular networks. DNA information functions as a static entropy factor enrolled once during registration, avoiding any real-time biological sampling. This study presents the first technically validated multi-biometric and DNA-derived cryptographic fusion model optimized for secure, inclusive, and cost-efficient digital authentication in resource-constrained environments.
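The sketch below isolates just the hashing step behind the Digital Genetic Signature idea: canonical, length-prefixed encodings of the civil ID and pre-extracted biometric and genomic templates are folded into a single SHA-256 digest, and verification compares digests in constant time. The field names and sample byte strings are hypothetical, and the BFV homomorphic-encryption layer and real template extraction are not shown.

```python
# Minimal sketch of the hashing step only: fuse canonical encodings of the
# civil ID (NIK) and pre-extracted biometric/genomic templates into a
# SHA-256 digest used as a "Digital Genetic Signature". The BFV homomorphic
# encryption layer and real template extraction are not shown; all field
# names and sample values are hypothetical.
import hashlib
import hmac

def digital_genetic_signature(nik: str, fingerprint_tpl: bytes,
                              retina_tpl: bytes, dna_entropy: bytes) -> str:
    """Hash a length-prefixed concatenation so field boundaries are unambiguous."""
    h = hashlib.sha256()
    for part in (nik.encode("utf-8"), fingerprint_tpl, retina_tpl, dna_entropy):
        h.update(len(part).to_bytes(4, "big"))
        h.update(part)
    return h.hexdigest()

enrolled = digital_genetic_signature("3174xxxxxxxxxxxx", b"\x01\x02fp-template",
                                     b"\x0a\x0bretina-template", b"static-dna-entropy")

# Verification compares digests in constant time (no raw biometrics stored).
candidate = digital_genetic_signature("3174xxxxxxxxxxxx", b"\x01\x02fp-template",
                                      b"\x0a\x0bretina-template", b"static-dna-entropy")
print(hmac.compare_digest(enrolled, candidate))   # True
```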
Citations: 0