Pub Date: 2026-01-15 | DOI: 10.1109/ACCESS.2026.3654521
Thi-Thu-Huong Le;Andro Aprila Adiputra;Anak Agung Ngurah Dharmawangsa;Hyunjin Jang;Howon Kim
The Controller Area Network (CAN) bus plays a key role in keeping vehicles safe by enabling critical systems to communicate with each other. However, because it does not have its own security features, the CAN bus is open to cyber threats. A CAN bus intrusion detection system (IDS) is critical for automotive cybersecurity. This has made it especially important to create IDS that are not just accurate but also efficient enough to run on the limited hardware of Electronic Control Units (ECUs). Unfortunately, many current deep learning solutions for CAN intrusion detection use large and complex models that are too demanding for most automotive systems. Moreover, existing deep learning approaches need excessive computational resources that are unsuitable for resource-constrained ECUs. We propose TinyCNNCANNet, an ultra-lightweight convolutional neural network with just 13K parameters, designed to provide low-latency and resource-efficient CAN intrusion detection under experimental settings. Rather than focusing on on-vehicle deployment, this work evaluates the feasibility of lightweight CNN architectures for future real-time capable CAN intrusion detection. We comprehensively evaluate TinyCNNCANNet on four diverse datasets: CANFD 2021, CICIoV 2024, Multi-Fuzzer-CAN 2025, and SynCAN 2025. These datasets encompass nine attack types. TinyCNNCANNet achieves competitive or superior performance compared to models with 115–300× more parameters. All architectures detect volume-based attacks (DoS, flooding, and fuzzing) most effectively. Sophisticated attacks (malfunction and fuzzer variants) challenge all models to a similar degree, regardless of complexity. TinyCNNCANNet shows superior generalization on synthetic out-of-distribution data (SynCAN 2025). It achieves 100% accuracy, while EfficientCANNet (86.82%) and MobileNetCANNet (59.33%) fail, revealing overfitting vulnerabilities in complex models. TinyCNNCANNet delivers 12–20× faster inference (0.16–0.51 ms vs. 2.14–4.15 ms) and a 145–383× smaller model size (0.04 MB vs. 5.81–15.32 MB). These results demonstrate the potential of TinyCNNCANNet for real-time capable CAN intrusion detection and indicate its suitability for future deployment on embedded automotive platforms.
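For a sense of what a detector in the 13K-parameter class looks like, here is a minimal sketch of an ultra-lightweight 1D CNN over CAN frames. The input encoding (a single frame as ID plus 8 payload bytes, treated as a 9-value signal) and the layer widths are illustrative assumptions, not TinyCNNCANNet's published architecture; the sketch only shows how a model of this size class is assembled and how its parameter count is checked.

```python
import torch
import torch.nn as nn

class TinyCANCNN(nn.Module):
    """Illustrative ~10K-parameter 1D CNN over a single CAN frame (ID + 8 payload bytes)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),   # (1, 9) -> (16, 9)
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),  # (16, 9) -> (32, 9)
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),                      # (32, 9) -> (32, 4)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),                     # normal vs. attack
        )

    def forward(self, x):                                 # x: (batch, 1, 9)
        return self.classifier(self.features(x))

model = TinyCANCNN()
print("parameters:", sum(p.numel() for p in model.parameters()))  # ~10K, same size class as the 13K reported
logits = model(torch.zeros(1, 1, 9))   # dummy frame; normalized ID/payload bytes would go here
print("output shape:", tuple(logits.shape))
```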
{"title":"Lightweight CNN-Based Intrusion Detection for CAN Bus Networks","authors":"Thi-Thu-Huong Le;Andro Aprila Adiputra;Anak Agung Ngurah Dharmawangsa;Hyunjin Jang;Howon Kim","doi":"10.1109/ACCESS.2026.3654521","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3654521","url":null,"abstract":"The Controller Area Network (CAN) bus plays a key role in keeping vehicles safe by enabling critical systems to communicate with each other. However, because it does not have its own security features, the CAN bus is open to cyber threats. A CAN bus intrusion detection system (IDS) is critical for automotive cybersecurity. This has made it especially important to create IDS that are not just accurate but also efficient enough to run on the limited hardware of Electronic Control Units (ECUs). Unfortunately, many current deep learning solutions for CAN intrusion detection use large and complex models that are too demanding for most automotive systems. Moreover, existing deep learning approaches need excessive computational resources that are unsuitable for resource-constrained ECUs. We propose TinyCNNCANNet, an ultra-lightweight convolutional neural network with just 13K parameters, designed to provide low-latency and resource-efficient CAN intrusion detection under experimental settings. Rather than focusing on on-vehicle deployment, this work evaluates the feasibility of lightweight CNN architectures for future real-time capable CAN intrusion detection. We comprehensively evaluate TinyCNNCANNet on four diverse datasets: CANFD 2021, CICIoV 2024, Multi-Fuzzer-CAN 2025, and SynCAN 2025. These datasets encompass nine attack types. TinyCNNCANNet achieves competitive or superior performance compared to models with 115-<inline-formula> <tex-math>$300times $ </tex-math></inline-formula> more parameters. All architectures detect volume-based attacks (DoS, flooding, and fuzzing) most effectively. Sophisticated attacks (malfunction and fuzzer variants) challenge all models to a similar degree, regardless of complexity. TinyCNNCANNet shows superior generalization on synthetic out-of-distribution data (SynCAN 2025). It achieves 100% accuracy, while EfficientCANNet (86.82%) and MobileNetCANNet (59.33%) fail, revealing overfitting vulnerabilities in complex models. TinyCNNCANNet delivers 12-<inline-formula> <tex-math>$20times $ </tex-math></inline-formula> faster inference (0.16-0.51 ms vs. 2.14-4.15 ms) and a 145-<inline-formula> <tex-math>$383times $ </tex-math></inline-formula> smaller model size (0.04 MB vs. 5.81-15.32 MB). These results demonstrate the potential of TinyCNNCANNet for real-time capable CAN intrusion detection and indicate its suitability for future deployment on embedded automotive platforms.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"14870-14891"},"PeriodicalIF":3.6,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11355494","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-15 | DOI: 10.1109/ACCESS.2026.3654644
David Yoon Suk Kang;Eujeanne Kim;Kyungsik Han;Sang-Wook Kim
Hypergraph representation learning has gained increasing attention for modeling higher-order relationships beyond pairwise interactions. Among existing approaches, clique expansion-based (CE-based) and star expansion-based (SE-based) methods are two dominant paradigms, yet their fundamental limitations remain underexplored. In this paper, we analyze CE- and SE-based methods and identify two complementary issues: CE-based methods suffer from over-agglomeration, where node representations in overlapping hyperedges become excessively clustered, while SE-based methods exhibit under-agglomeration, failing to sufficiently aggregate nodes within the same hyperedge. To address these issues, we propose STARGCN, a hypergraph representation learning framework that constructs a bipartite graph via star expansion and employs a graph convolutional network with a tuplewise loss to explicitly enforce appropriate aggregation and separation of node representations. Experiments on seven real-world hypergraph datasets demonstrate that STARGCN consistently and significantly outperforms five state-of-the-art CE- and SE-based methods across all datasets, achieving performance gains of up to 13.2% in accuracy and 10.2% in F1-score over the strongest baseline.
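The star-expansion construction is easiest to see on a toy hypergraph. The sketch below treats the incidence matrix as the biadjacency matrix of a node-hyperedge bipartite graph and runs one round of degree-normalized message passing through the hyperedge nodes, which is the basic aggregation that SE-based methods build on. The hypergraph, features, and normalization are illustrative; STARGCN's GCN layers and tuplewise loss are not shown.

```python
import numpy as np

hyperedges = {          # hypothetical hypergraph: hyperedge id -> member nodes
    "e0": [0, 1, 2],
    "e1": [1, 3],
    "e2": [2, 3, 4],
}
n_nodes = 5

# Incidence matrix H (nodes x hyperedges); star expansion uses H as the
# biadjacency matrix of the bipartite node-hyperedge graph.
H = np.zeros((n_nodes, len(hyperedges)))
for j, (_, members) in enumerate(sorted(hyperedges.items())):
    H[members, j] = 1.0

Dv = np.diag(1.0 / H.sum(axis=1))      # node degrees
De = np.diag(1.0 / H.sum(axis=0))      # hyperedge sizes
X = np.eye(n_nodes)                    # toy node features (one-hot)

hyperedge_repr = De @ H.T @ X          # nodes -> hyperedges
node_repr = Dv @ H @ hyperedge_repr    # hyperedges -> nodes
print(node_repr.round(2))
```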
{"title":"Revisiting Clique and Star Expansions in Hypergraph Representation Learning: Observations, Problems, and Solutions","authors":"David Yoon Suk Kang;Eujeanne Kim;Kyungsik Han;Sang-Wook Kim","doi":"10.1109/ACCESS.2026.3654644","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3654644","url":null,"abstract":"Hypergraph representation learning has gained increasing attention for modeling higher-order relationships beyond pairwise interactions. Among existing approaches, clique expansion-based (CE-based) and star expansion-based (SE-based) methods are two dominant paradigms, yet their fundamental limitations remain underexplored. In this paper, we analyze CE- and SE-based methods and identify two complementary issues: CE-based methods suffer from over-agglomeration, where node representations in overlapping hyperedges become excessively clustered, while SE-based methods exhibit under-agglomeration, failing to sufficiently aggregate nodes within the same hyperedge. To address these issues, we propose <inline-formula> <tex-math>$textsf {STARGCN}$ </tex-math></inline-formula>, a hypergraph representation learning framework that constructs a bipartite graph via star expansion and employs a graph convolutional network with a tuplewise loss to explicitly enforce appropriate aggregation and separation of node representations. Experiments on seven real-world hypergraph datasets demonstrate that <inline-formula> <tex-math>$textsf {STARGCN}$ </tex-math></inline-formula> consistently and significantly outperforms five state-of-the-art CE- and SE-based methods across all datasets, achieving performance gains of up to 13.2% in accuracy and 10.2% in F1-score over the strongest baseline.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"10797-10810"},"PeriodicalIF":3.6,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11354166","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-14 | DOI: 10.1109/ACCESS.2026.3654275
Parthiva Yadlapalli;Rishi Raj;Dayananda Pruthviraja
Neural Architecture Search (NAS) has emerged as a powerful paradigm for automating model design, yet most existing approaches remain training-intensive and computationally prohibitive. In resource-constrained domains such as UAV-based perception and Tiny Machine Learning (TinyML), performing repeated training or fine-tuning during search is infeasible due to strict compute, memory, and energy limitations. We propose a Proxy-Guided Bayesian Optimization NAS framework that eliminates all training during search by modeling a fused set of trainability proxies (e.g., SynFlow, Jacobian covariance, Neural Tangent Kernel) and hardware proxies (e.g., FLOPs, parameters, latency) within a unified Bayesian surrogate. This surrogate enables uncertainty-aware exploration directly under device-level constraints, guiding the search toward architectures that are both efficient and deployable. Unlike conventional NAS pipelines that demand extensive GPU-time for accuracy evaluations, our method completes the entire search on NATS-Bench (TSS) in only ~0.8 GPU-hours—achieving a top-1 accuracy of 93.25% with 2.10M parameters, 110M FLOPs, and 0.80 ms latency. This corresponds to an order-of-magnitude reduction in search cost compared to accuracy-driven baselines such as REA and BOHB, while preserving accuracy and satisfying all TinyML deployment budgets ($P_{max}$, $F_{max}$, $L_{max}$). By coupling hardware-awareness with training-free optimization, the proposed approach bridges the gap between proxy-based NAS and real-world, energy-efficient deployment for UAV and edge intelligence applications.
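To make the training-free selection step concrete, the sketch below fuses z-scored proxy values and filters candidate architectures against deployment budgets $P_{max}$, $F_{max}$, $L_{max}$. The candidate list, proxy numbers, and budget values are hypothetical placeholders, not NATS-Bench data, and the Bayesian surrogate with uncertainty-aware acquisition described in the paper is not reproduced here.

```python
import numpy as np

candidates = [
    # (name, synflow, jacob_cov, ntk_cond, params_M, flops_M, latency_ms) -- hypothetical values
    ("arch_a", 8.1, 0.62, 120.0, 2.10, 110.0, 0.80),
    ("arch_b", 9.4, 0.55, 340.0, 4.80, 290.0, 1.90),
    ("arch_c", 7.2, 0.71,  95.0, 1.60,  85.0, 0.65),
]
P_MAX, F_MAX, L_MAX = 3.0, 150.0, 1.0      # illustrative TinyML deployment budgets

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (x.std() + 1e-9)

syn = zscore([c[1] for c in candidates])
jac = zscore([c[2] for c in candidates])
ntk = -zscore([c[3] for c in candidates])   # lower NTK condition number is better

for (name, *_, p, f, l), score in zip(candidates, syn + jac + ntk):
    feasible = p <= P_MAX and f <= F_MAX and l <= L_MAX
    print(f"{name}: fused proxy score = {score:+.2f}, within budgets = {feasible}")
```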
{"title":"Training-Free Proxy-Guided Bayesian NAS for UAV-Constrained TinyML","authors":"Parthiva Yadlapalli;Rishi Raj;Dayananda Pruthviraja","doi":"10.1109/ACCESS.2026.3654275","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3654275","url":null,"abstract":"Neural Architecture Search (NAS) has emerged as a powerful paradigm for automating model design, yet most existing approaches remain training-intensive and computationally prohibitive. In resource-constrained domains such as UAV-based perception and Tiny Machine Learning (TinyML), performing repeated training or fine-tuning during search is infeasible due to strict compute, memory, and energy limitations. We propose a Proxy-Guided Bayesian Optimization NAS framework that eliminates all training during search by modeling a fused set of trainability proxies (e.g., SynFlow, Jacobian covariance, Neural Tangent Kernel) and hardware proxies (e.g., FLOPs, parameters, latency) within a unified Bayesian surrogate. This surrogate enables uncertainty-aware exploration directly under device-level constraints, guiding the search toward architectures that are both efficient and deployable. Unlike conventional NAS pipelines that demand extensive GPU-time for accuracy evaluations, our method completes the entire search on NATS-Bench (TSS) in only ~0.8 GPU-hours—achieving a top-1 accuracy of 93.25% with 2.10M parameters, 110M FLOPs, and 0.80 ms latency. This corresponds to an order-of-magnitude reduction in search cost compared to accuracy-driven baselines such as REA and BOHB, while preserving accuracy and satisfying all TinyML deployment budgets (<inline-formula> <tex-math>$P_{max }$ </tex-math></inline-formula>, <inline-formula> <tex-math>$F_{max }$ </tex-math></inline-formula>, <inline-formula> <tex-math>$L_{max }$ </tex-math></inline-formula>). By coupling hardware-awareness with training-free optimization, the proposed approach bridges the gap between proxy-based NAS and real-world, energy-efficient deployment for UAV and edge intelligence applications.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"10654-10666"},"PeriodicalIF":3.6,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11352858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | DOI: 10.1109/ACCESS.2026.3654007
Shadi Banitaan;Taher El Taher;Khalid Aldamasi;Hassan Hassoun;Shoaib Ahmed
Accurate short-term occupancy forecasting is essential for smart building operations such as energy management, space utilization, safety, and facility planning. However, many existing solutions rely on dedicated sensors that increase deployment cost and operational complexity and limit scalability. This paper proposes a sensor-free occupancy forecasting framework that utilizes Wi-Fi syslog data already generated by enterprise networks. The study uses two real-world datasets derived from campus and office building Wi-Fi infrastructures and evaluates several machine learning models, including Random Forest, Decision Tree, Gradient Boosting, and a Long Short-Term Memory (LSTM) network, for multi-step forecasting at a 5-minute resolution. Experimental results show that Random Forest achieves the highest accuracy, with Coefficient of Determination ($R^{2}$) values of up to 0.997 and consistently low mean absolute error (MAE) and root mean squared error (RMSE), while LSTM provides competitive performance for short and medium forecasting horizons. Extended horizon experiments show that LSTM-based forecasts stay reliable for look-ahead periods of up to 60 minutes, while longer horizons show increased sensitivity to temporal variability and pattern changes. We also show that using only a small number of features is adequate to achieve high prediction accuracy, which simplifies data preparation and supports real-time deployment. The evaluation also examines cross-zone and cross-building generalization and demonstrates that short-term adaptation enables robust deployment across heterogeneous environments with limited retraining overhead. The proposed framework is integrated into an interactive dashboard to support visualization and decision-making. Overall, the results indicate that Wi-Fi syslog-based occupancy forecasting is a practical, scalable, and privacy-preserving approach for smart building management.
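A minimal sketch of the forecasting setup follows: lagged occupancy counts (as would be derived from Wi-Fi syslog device associations) feed a Random Forest that predicts a fixed horizon ahead at 5-minute resolution. The synthetic daily-cycle series, the one-hour lag window, and the 30-minute horizon are illustrative assumptions, not the paper's feature set or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.arange(2000)                                # 5-minute steps (288 per day)
occupancy = 50 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

LAGS, HORIZON = 12, 6                              # 1 h of history, forecast 30 min ahead
X, y = [], []
for i in range(LAGS, t.size - HORIZON):
    X.append(occupancy[i - LAGS:i])                # lagged counts as features
    y.append(occupancy[i + HORIZON])               # target count at the horizon
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.abs(pred - y[split:]).mean()
print(f"MAE at +{HORIZON * 5} min horizon: {mae:.2f} occupants")
```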
{"title":"Sensor-Free Occupancy Forecasting for Smart Buildings: A Wi-Fi Syslog Approach With Machine and Deep Learning","authors":"Shadi Banitaan;Taher El Taher;Khalid Aldamasi;Hassan Hassoun;Shoaib Ahmed","doi":"10.1109/ACCESS.2026.3654007","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3654007","url":null,"abstract":"Accurate short-term occupancy forecasting is essential for smart building operations such as energy management, space utilization, safety, and facility planning. However, many existing solutions rely on dedicated sensors that increase deployment cost and operational complexity and limit scalability. This paper proposes a sensor-free occupancy forecasting framework that utilizes Wi-Fi syslog data already generated by enterprise networks. The study uses two real-world datasets derived from campus and office building Wi-Fi infrastructures and evaluates several machine learning models, including Random Forest, Decision Tree, Gradient Boosting, and a Long Short-Term Memory (LSTM) network, for multi-step forecasting at a 5-minute resolution. Experimental results show that Random Forest achieves the highest accuracy, with Coefficient of Determination (<inline-formula> <tex-math>$R^{2}$ </tex-math></inline-formula>) values of up to 0.997 and consistently low mean absolute error (MAE) and root mean squared error (RMSE), while LSTM provides competitive performance for short and medium forecasting horizons. Extended horizon experiments show that LSTM-based forecasts stay reliable for look-ahead periods of up to 60 minutes, while longer horizons show increased sensitivity to temporal variability and pattern changes. We also show that using only a small number of features is adequate to achieve high prediction accuracy, which simplifies data preparation and supports real-time deployment. The evaluation also examines cross-zone and cross-building generalization and demonstrates that short-term adaptation enables robust deployment across heterogeneous environments with limited retraining overhead. The proposed framework is integrated into an interactive dashboard to support visualization and decision-making. Overall, the results indicate that Wi-Fi syslog-based occupancy forecasting is a practical, scalable, and privacy-preserving approach for smart building management.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"10891-10909"},"PeriodicalIF":3.6,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348122","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-13 | DOI: 10.1109/ACCESS.2026.3653715
Krishna Kodur;Manizheh Zand;Matthew Tognotti;Cinthya Járegui;Maria Kyrarini
Practical and intuitive communication remains a critical challenge in Human-Robot Collaboration, particularly within domestic environments. Conventional systems typically rely on structured (scripted) speech inputs, which may limit natural interaction and accessibility. This study evaluates user preferences and system usability between structured and unstructured (conversational) speech modalities in a collaborative cooking scenario using a mobile manipulator robot. Thirty adult participants engaged in tasks involving both communication modes, during which the frequency and impact of robot execution errors were also assessed. The proposed Speech2Action framework integrates Google Cloud Speech-to-Text, BERT, and GPT-Neo models for intent recognition and command generation, combined with ROS-based motion control for object retrieval. Usability and perception were analyzed using System Usability Scale (SUS) and Human–Robot Collaboration Questionnaire (HRCQ) metrics through paired t-tests and correlation analyses. Results show a preference for unstructured speech (p = 0.0032) with higher SUS scores, while robot execution errors affected perceived safety but not overall usability, consistent with the Pratfall Effect. The findings inform the design of natural, robust, and user-centric speech interfaces for collaborative robots.
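As a rough illustration of the intent-recognition stage, the sketch below maps a transcribed utterance to a discrete robot action using an off-the-shelf zero-shot classifier as a stand-in for the paper's fine-tuned BERT/GPT-Neo stack. The action labels, model choice, and the final ROS hand-off are assumptions, not the Speech2Action implementation.

```python
from transformers import pipeline

# Generic zero-shot classifier as a stand-in for the fine-tuned intent model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
actions = ["fetch the bowl", "fetch the spoon", "stop", "no action"]   # hypothetical action set

utterance = "could you grab me that bowl over there, please?"          # output of the speech-to-text stage
result = classifier(utterance, candidate_labels=actions)
intent = result["labels"][0]                                           # highest-scoring action
print(f"intent: {intent} (score={result['scores'][0]:.2f})")
# A ROS node would translate the winning intent into a pick-and-place goal here.
```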
{"title":"Structured and Unstructured Speech2Action Frameworks for Human–Robot Collaboration: A User Study","authors":"Krishna Kodur;Manizheh Zand;Matthew Tognotti;Cinthya Járegui;Maria Kyrarini","doi":"10.1109/ACCESS.2026.3653715","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3653715","url":null,"abstract":"Practical and intuitive communication remains a critical challenge in Human-Robot Collaboration, particularly within domestic environments. Conventional systems typically rely on structured (scripted) speech inputs, which may limit natural interaction and accessibility. This study evaluates user preferences and system usability between structured and unstructured (conversational) speech modalities in a collaborative cooking scenario using a mobile manipulator robot. Thirty adult participants engaged in tasks involving both communication modes, during which the frequency and impact of robot execution errors were also assessed. The proposed Speech2Action framework integrates Google Cloud Speech-to-Text, BERT, and GPT-Neo models for intent recognition and command generation, combined with ROS-based motion control for object retrieval. Usability and perception were analyzed using System Usability Scale (SUS) and Human–Robot Collaboration Questionnaire (HRCQ) metrics through paired t-tests and correlation analyses. Results show a preference for unstructured speech (p = 0.0032) with higher SUS scores, while robot execution errors affected perceived safety but not overall usability, consistent with the Pratfall Effect. The findings inform the design of natural, robust, and user-centric speech interfaces for collaborative robots.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"10782-10796"},"PeriodicalIF":3.6,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11348049","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651866
Jung Yeon Hwang;Jong Hwan Park
Asymmetric broadcast encryption (ABE) allows a sender, given the public keys or identities of recipients, to encrypt a message such that only an authorized subset of users can decrypt it. In fully asymmetric settings, where any user may act as a sender, ciphertext generation time and ciphertext size become critical performance metrics. However, most existing ABE schemes impose substantial sender-side computational costs and scale poorly with system size. This paper presents new ABE constructions that achieve fast ciphertext generation while maintaining compact ciphertexts. Our schemes are built upon the identity-based revocation (IBR) framework, enabling each user’s identity to serve directly as a public key. We first propose a basic IBR scheme that produces constant-size ciphertexts independent of the number of recipients or revoked users, achieving efficient encryption through optimized hash-to-point and aggregation techniques. We then extend this design to a tree-based construction that supports large-scale systems and offers a practical trade-off among encryption cost, decryption efficiency, and secret-key size. Both schemes are proven CPA-secure under a modified Decisional Bilinear Diffie–Hellman (mDBDH) assumption in the random-oracle model. Extensive experiments with concrete parameters demonstrate that our schemes significantly outperform existing asymmetric revocation approaches. For a system with $10^{6}$ users and a revocation rate of 1.5–3%, prior schemes require tens of seconds to generate a ciphertext, whereas our constructions complete encryption within 1.6 seconds while keeping the ciphertext size nearly constant (below $10^{2}$ KB).
{"title":"Constructing Identity-Based Revocation Schemes for Efficient Generation of Ciphertexts","authors":"Jung Yeon Hwang;Jong Hwan Park","doi":"10.1109/ACCESS.2026.3651866","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3651866","url":null,"abstract":"Asymmetric broadcast encryption (ABE) allows a sender, given the public keys or identities of recipients, to encrypt a message such that only an authorized subset of users can decrypt it. In fully asymmetric settings, where any user may act as a sender, ciphertext generation time and ciphertext size become critical performance metrics. However, most existing ABE schemes impose substantial sender-side computational costs and scale poorly with system size. This paper presents new ABE constructions that achieve fast ciphertext generation while maintaining compact ciphertexts. Our schemes are built upon the identity-based revocation (IBR) framework, enabling each user’s identity to serve directly as a public key. We first propose a basic IBR scheme that produces constant-size ciphertexts independent of the number of recipients or revoked users, achieving efficient encryption through optimized hash-to-point and aggregation techniques. We then extend this design to a tree-based construction that supports large-scale systems and offers a practical trade-off among encryption cost, decryption efficiency, and secret-key size. Both schemes are proven CPA-secure under a modified Decisional Bilinear Diffie–Hellman (mDBDH) assumption in the random-oracle model. Extensive experiments with concrete parameters demonstrate that our schemes significantly outperform existing asymmetric revocation approaches. For a system with <inline-formula> <tex-math>$10^{6}$ </tex-math></inline-formula> users and a revocation rate of 1.5–3%, prior schemes require tens of seconds to generate a ciphertext, whereas our constructions complete encryption within 1.6 seconds while keeping the ciphertext size nearly constant (below <inline-formula> <tex-math>$10^{2}$ </tex-math></inline-formula> KB).","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"7730-7743"},"PeriodicalIF":3.6,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339495","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3653467
Xizi Chen;Shin Kawai;Triet Nguyen-van
Finite control set model predictive control has garnered significant attention in inverter control due to its compatibility with the discrete nature of power electronic systems. However, a major limitation of conventional finite control set model predictive control is its inherently variable switching frequency, which fluctuates with operating conditions and system parameters. This variability poses challenges for inverter performance, complicates filter design, and increases harmonic distortion. To address these issues, this paper proposes a fixed switching frequency model predictive control method, built on an adaptive-bandwidth control strategy that enables fixed switching frequency operation while preserving the core advantages of finite control set model predictive control. The proposed approach derives a dynamic relationship between the switching frequency and system voltage, allowing the controller to maintain a desired frequency without modifying the cost function. Simulation studies on a single-phase half-bridge inverter demonstrate that the method effectively stabilizes the switching frequency, maintains robustness against parameter variations, and achieves reliable tracking performance.
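The core finite-control-set loop the paper builds on can be sketched in a few lines: at each sampling instant, enumerate the two half-bridge switch states, predict the filter current one step ahead, and apply the state that minimizes a tracking cost. The circuit parameters below are illustrative, and the adaptive-bandwidth mechanism that fixes the switching frequency is deliberately omitted; the sketch instead exposes the variable switching frequency that the paper sets out to eliminate.

```python
import numpy as np

VDC, L, R, TS = 400.0, 5e-3, 0.5, 20e-6        # DC link (V), filter L (H), load R (ohm), sample time (s)
t = np.arange(0, 0.02, TS)                     # one 50 Hz fundamental period
i_ref = 10.0 * np.sin(2 * np.pi * 50 * t)      # current reference (A)

i = 0.0
applied = []
for k in range(len(t) - 1):
    best_u, best_cost = None, np.inf
    for u in (+VDC / 2, -VDC / 2):             # the two half-bridge switch states
        i_pred = i + TS / L * (u - R * i)      # forward-Euler one-step prediction
        cost = (i_ref[k + 1] - i_pred) ** 2    # current-tracking cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    i = i + TS / L * (best_u - R * i)          # apply the selected state
    applied.append(best_u)

switch_changes = np.sum(np.diff(applied) != 0)
print(f"average switching frequency: {switch_changes / (2 * t[-1]):,.0f} Hz")
```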
{"title":"A Fixed Switching Frequency Model Predictive Control for Half-Bridge Inverter","authors":"Xizi Chen;Shin Kawai;Triet Nguyen-van","doi":"10.1109/ACCESS.2026.3653467","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3653467","url":null,"abstract":"Finite control set model predictive control has garnered significant attention in inverter control due to its compatibility with the discrete nature of power electronic systems. However, a major limitation of normal finite control set model predictive control is its inherently variable switching frequency, which fluctuates with operating conditions and system parameters. This variability poses challenges for inverter performance, complicates filter design, and increases harmonic distortion. To address these issues, this paper proposes a fixed switching frequency model predictive control method, which is based on an adaptive bandwidth-based control method that enables fixed switching frequency operation while preserving the core advantages of finite control set model predictive control. The proposed approach derives a dynamic relationship between the switching frequency and system voltage, allowing the controller to maintain a desired frequency without modifying the cost function. Simulation studies on a single-phase half-bridge inverter demonstrate that the method effectively stabilizes the switching frequency, maintains robustness against parameter variations, and achieves reliable tracking performance.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"7895-7906"},"PeriodicalIF":3.6,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11346970","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651796
Lin Jiaxin;Huang Yuetian;Li Ruiqi;Qin Qiang;Huang Hanye
In the heterogeneous computing networks formed by modern cloud–edge collaboration, dynamic resource scheduling has become a key bottleneck for system performance. Traditional rule-based methods falter when conditions shift, while most current deep reinforcement learning schedulers use single-level decision architectures and struggle to balance global optimization with local real-time response. Addressing this core issue, this study proposes a novel hierarchical graph meta-reinforcement learning scheduler (HG-MRLS). The framework employs a dual-layer decision mechanism that decomposes complex scheduling into macro-level strategic planning and micro-level tactical execution. At the upper layer, a graph attention network-based macro scheduler learns forward-looking resource allocation by encoding the global topology and dynamic features. At the lower layer, a meta-reinforcement learning-based micro scheduler enables rapid self-adaptation in unseen local environments. To validate effectiveness, we set up a simulation platform driven by real workloads. Extensive experiments show that HG-MRLS outperforms various classic heuristics and state-of-the-art deep RL methods on key metrics such as average job completion time, deadline satisfaction rate, and resource utilization. Under high loads and sudden dynamic events in particular, the framework exhibits outstanding stability and environmental adaptability. Its core scientific contribution lies in demonstrating a new paradigm for resolving the inherent tension between global strategic planning and local real-time response in complex, dynamic systems, offering a scalable blueprint for multi-scale intelligent control.
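As a concrete picture of the macro scheduler's encoding step, the sketch below runs one single-head graph-attention aggregation over a small cluster topology in NumPy. The topology, node features, and weights are synthetic stand-ins; the HG-MRLS hierarchy, the meta-RL micro scheduler, and the training procedure are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)   # 4 compute nodes, self-loops included
X = rng.normal(size=(4, 3))                 # per-node features, e.g. cpu, mem, queue length
W = rng.normal(size=(3, 4))                 # learned projection (random stand-in)
a = rng.normal(size=(8,))                   # attention vector over concatenated node pairs

H = X @ W                                   # projected features, shape (4, 4)
scores = np.full((4, 4), -np.inf)           # -inf masks non-neighbors in the softmax
for i in range(4):
    for j in range(4):
        if A[i, j]:
            z = np.concatenate([H[i], H[j]]) @ a
            scores[i, j] = z if z > 0 else 0.2 * z   # LeakyReLU, as in GAT
alpha = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over neighbors
H_out = alpha @ H                           # attention-weighted neighborhood aggregation
print(H_out.round(2))
```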
{"title":"HG-MRLS: A Hierarchical Graph Meta-Reinforcement Learning Framework for Dynamic Scheduling in Heterogeneous Computing Networks","authors":"Lin Jiaxin;Huang Yuetian;Li Ruiqi;Qin Qiang;Huang Hanye","doi":"10.1109/ACCESS.2026.3651796","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3651796","url":null,"abstract":"Heterogeneous computing networks formed by modern cloud and edge collaboration pose dynamic resource scheduling as a key bottleneck for system performance. Rule-based traditional methods falter in handling environmental shifts, while most current deep reinforcement learning schedulers use single-decision architectures, struggling to balance global optimization with local real-time responses. Addressing this core issue, this study proposes a novel hierarchical graph meta-reinforcement learning scheduler (HG-MRLS). The framework employs a dual-layer decision mechanism, decomposing complex scheduling into macro strategic planning and micro tactical execution. At the upper layer, we designed a graph attention network-based macro scheduler that learns forward-looking resource allocation via encoding global topology and dynamic features. At the lower layer, we built a meta-reinforcement learning-based micro scheduler for rapid self-adaptation in unknown local environments. To validate effectiveness, we set up a simulation platform based on real workloads. Extensive experiments show HG-MRLS outperforms various classic heuristics and state-of-the-art deep RL methods in key metrics like average job completion time, deadline satisfaction rate, and resource utilization. Especially under high loads and sudden dynamic events, the framework exhibits outstanding stability and environmental adaptability. Its core scientific contribution is in demonstrating a new paradigm for resolving the inherent tension between global strategic planning and local real-time responses in complex, dynamic systems, offering a scalable blueprint for multi-scale intelligent control.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"7792-7811"},"PeriodicalIF":3.6,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11339505","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2026.3651993
Hugo Leite;Dênis Leite;Diego Rativa
The emergence of Industry 4.0 has catalyzed the integration of advanced technologies to enhance manufacturing efficiency, reliability, and competitiveness. Fault Detection and Diagnosis (FDD) systems are critical for minimizing downtime and ensuring operational continuity. This research investigates the integration of Digital Twin (DT) technology with Machine Learning (ML) models for real-time fault detection and diagnosis (RT-FDD) in discrete manufacturing machines. Two industrial systems—a Pick-and-Place machine and a Furnace—were modeled using linear and non-linear models to develop Digital Twins. The method is validated on a simulated environment replicating real industrial behavior, combining DT-generated features with conventional real-time process data. The proposed approach improved F1 scores by up to 11% and demonstrated enhanced robustness in both inter-cycle and intra-cycle fault detection tasks. Notably, for the Furnace machine, the method enabled fault detection 40% earlier in the cycle while maintaining the same F1 Score of 94%, and provided reliable diagnosis with an F1 Score of 80% at only 15% of cycle completion. A comprehensive evaluation of 16 ML algorithms highlighted the effectiveness of DT features in boosting predictive performance.
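The digital-twin feature idea can be illustrated with a toy residual pipeline: the twin supplies the expected signal, the residual between measurement and expectation becomes an additional feature, and a standard classifier flags faults. The synthetic furnace-like profile, the fault injection, and the Random Forest choice are assumptions for illustration, not the paper's twin models or datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
t = np.arange(n)
dt_prediction = 150 + 50 * np.sin(2 * np.pi * t / 500)   # twin's expected temperature profile
measured = dt_prediction + rng.normal(0, 1.0, n)          # healthy measurements track the twin
fault = rng.random(n) < 0.1                               # ~10% of samples drift away
measured[fault] += rng.normal(8, 2.0, fault.sum())

residual = measured - dt_prediction                       # DT-derived feature
X = np.column_stack([measured, residual])                 # raw signal + residual
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1500], fault[:1500])
print("held-out accuracy:", round(clf.score(X[1500:], fault[1500:]), 3))
```

The residual is what makes the fault separable here: the raw measurement alone swings with the normal heating profile, while the residual stays near zero unless something deviates from the twin's expectation.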
{"title":"Early Fault Detection and Diagnosis in Industrial Machines: An Approach Using Digital Twins","authors":"Hugo Leite;Dênis Leite;Diego Rativa","doi":"10.1109/ACCESS.2026.3651993","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3651993","url":null,"abstract":"The emergence of Industry 4.0 has catalyzed the integration of advanced technologies to enhance manufacturing efficiency, reliability, and competitiveness. Fault Detection and Diagnosis (FDD) systems are critical for minimizing downtime and ensuring operational continuity. This research investigates the integration of Digital Twin (DT) technology with Machine Learning (ML) models for real-time fault detection and diagnosis (RT-FDD) in discrete manufacturing machines. Two industrial systems—a Pick-and-Place machine and a Furnace—were modeled using linear and non-linear models to develop Digital Twins. The method is validated on a simulated environment replicating real industrial behavior, combining DT-generated features with conventional real-time process data. The proposed approach improved F1 scores by up to 11% and demonstrated enhanced robustness in both inter-cycle and intra-cycle fault detection tasks. Notably, for the Furnace machine, the method enabled fault detection 40% earlier in the cycle while maintaining the same F1 Score of 94%, and provided reliable diagnosis with an F1 Score of 80% at only 15% of cycle completion. A comprehensive evaluation of 16 ML algorithms highlighted the effectiveness of DT features in boosting predictive performance.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"7829-7840"},"PeriodicalIF":3.6,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11338757","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | DOI: 10.1109/ACCESS.2025.3645071
Ben Rahman;Maryani;Thoyyibah T.
IDChat (Internet-Dependent Cryptographic Hybrid Authentication Technology) is a universal digital identity framework designed to eliminate costly and vulnerable SMS-based one-time passwords (OTPs). It fuses civil identity (NIK), multi-biometric modalities (fingerprint and retina), and genomic-derived entropy to reinforce cryptographic key generation under a unified fusion engine. Targeting Indonesia’s 120 million internet users—90% of whom still rely on paid SMS OTPs—IDChat introduces a privacy-preserving Digital Genetic Signature (DGS) generated via SHA-256 hashing and homomorphic encryption (BFV scheme), enhancing resistance to spoofing and brute-force attacks. Experimental validation using OpenCV with FVC2004 and CASIA-Iris datasets achieved 99.1% authentication accuracy, a false acceptance rate (FAR) of 0.008%, and an average latency of 1.9 seconds, demonstrating competitive efficiency against existing biometric systems. A comparative cost analysis indicates potential national savings of Rp 12–50 trillion annually by replacing SMS OTPs with free Wi-Fi–based verification. Unlike centralized frameworks such as Aadhaar or FIDO2, IDChat performs local (offline) verification within closed Wi-Fi environments through cached encrypted templates, ensuring independence from cellular networks. DNA information functions as a static entropy factor enrolled once during registration, avoiding any real-time biological sampling. This study presents the first technically validated multi-biometric and DNA-derived cryptographic fusion model optimized for secure, inclusive, and cost-efficient digital authentication in resource-constrained environments.
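A minimal sketch of the key-derivation idea follows: the enrolled identity factors are concatenated and hashed with SHA-256 to form a Digital Genetic Signature, which then keys an HMAC challenge-response. All factor values are placeholders, and the construction is an illustrative reading of the abstract; the paper's BFV homomorphic-encryption layer and biometric template matching are not included.

```python
import hashlib
import hmac

nik = "3173xxxxxxxxxxxx"                         # civil identity number (placeholder)
fingerprint_template = b"placeholder-fingerprint-template"   # enrolled biometric template (placeholder)
retina_template = b"placeholder-retina-template"
dna_entropy = bytes.fromhex("a3f1" * 16)         # static genomic-derived entropy, enrolled once

# Digital Genetic Signature: hash over the concatenated, enrolled factors.
dgs = hashlib.sha256(
    nik.encode() + fingerprint_template + retina_template + dna_entropy
).hexdigest()

# Per-session tag bound to a server challenge (HMAC keyed by the DGS),
# the kind of local verification that avoids any SMS OTP round trip.
challenge = b"server-nonce-0001"
tag = hmac.new(bytes.fromhex(dgs), challenge, hashlib.sha256).hexdigest()
print("DGS:", dgs[:16], "...  session tag:", tag[:16], "...")
```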
{"title":"IDChat: Toward a Universal, Multi-Biometric Digital Identity for Next-Generation Secure Communication in ASEAN","authors":"Ben Rahman;Maryani;Thoyyibah T.","doi":"10.1109/ACCESS.2025.3645071","DOIUrl":"https://doi.org/10.1109/ACCESS.2025.3645071","url":null,"abstract":"IDChat (Internet-Dependent Cryptographic Hybrid Authentication Technology) is a universal digital identity framework designed to eliminate costly and vulnerable SMS-based one-time passwords (OTPs). It fuses civil identity (NIK), multi-biometric modalities (fingerprint and retina), and genomic-derived entropy to reinforce cryptographic key generation under a unified fusion engine. Targeting Indonesia’s 120 million internet users—90% of whom still rely on paid SMS OTPs—IDChat introduces a privacy-preserving Digital Genetic Signature (DGS) generated via SHA-256 hashing and homomorphic encryption (BFV scheme), enhancing resistance to spoofing and brute-force attacks. Experimental validation using OpenCV with FVC2004 and CASIA-Iris datasets achieved 99.1% authentication accuracy, a false acceptance rate (FAR) of 0.008%, and an average latency of 1.9 seconds, demonstrating competitive efficiency against existing biometric systems. A comparative cost analysis indicates potential national savings of Rp 12–50 trillion annually by replacing SMS OTPs with free Wi-Fi–based verification. Unlike centralized frameworks such as Aadhaar or FIDO2, IDChat performs local (offline) verification within closed Wi-Fi environments through cached encrypted templates, ensuring independence from cellular networks. DNA information functions as a static entropy factor enrolled once during registration, avoiding any real-time biological sampling. This study presents the first technically validated multi-biometric and DNA-derived cryptographic fusion model optimized for secure, inclusive, and cost-efficient digital authentication in resource-constrained environments.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"14892-14902"},"PeriodicalIF":3.6,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11343738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}