
Latest Publications in the IEEE Open Journal of the Computer Society

A Probabilistic Method for Hierarchical Multisubject Classification of Documents Based on Multilingual Subject Term Vocabularies
Pub Date: 2025-07-23 | DOI: 10.1109/OJCS.2025.3592254
Nikolaos Makris;Stamatina K. Koutsileou;Nikolaos Mitrou
Hierarchical Multilabel Classification (HMC) is a challenging task in information retrieval, especially for scientific textbooks, where the objective is to assign multiple labels that adhere to a hierarchical taxonomy. This research presents a new language-neutral methodology for HMC that assesses documents as normalised weighted distributions of well-defined subjects across hierarchical levels, based on a hierarchical subject term vocabulary. The proposed approach utilizes Bayesian formulas, in contrast to typical methods that depend on machine learning models, thereby obviating the need for resource-intensive training processes at the various hierarchical levels. The method integrates refined pre-processing techniques, such as natural language processing (NLP) and filtering of non-distinctive terms, to enhance classification accuracy. It employs Bayesian inference along with real-time and cached computations across all hierarchical levels, yielding an effective, time-efficient, and interpretable classification method while ensuring scalability for large datasets. Experimental results demonstrate the algorithm's ability to classify scientific textbooks across hierarchical subject tiers with high precision and recall and to retrieve semantically related scientific textbooks, thereby verifying its efficacy in tasks requiring hierarchical subject classification. This study presents a streamlined, interpretable alternative to model-dependent HMC approaches, rendering it particularly appropriate for real-world applications in educational and scientific fields. Furthermore, in the context of the present study, two public Web User Interfaces were published: the first is built on Skosmos to illustrate the hierarchical structure of the subject term vocabulary, while the second employs the HMC method to present, in real time, the subject classification of English and Greek textual data.
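The abstract gives no implementation details, but the Bayesian scoring step it describes can be sketched. Below is a minimal illustration, assuming a toy two-subject vocabulary at a single hierarchical level with made-up term likelihoods; the names `subject_term_probs` and `classify` are ours, not the paper's:

```python
from collections import Counter
import math

# Hypothetical per-subject term likelihoods for one hierarchical level;
# in the paper's setting these would come from a multilingual subject
# term vocabulary, the numbers here are invented.
subject_term_probs = {
    "physics": {"quantum": 0.40, "energy": 0.30, "cell": 0.01},
    "biology": {"quantum": 0.02, "energy": 0.10, "cell": 0.50},
}
subject_priors = {"physics": 0.5, "biology": 0.5}

def classify(tokens, smoothing=1e-6):
    """Return a normalised weighted distribution over subjects (Bayes' rule)."""
    counts = Counter(tokens)
    log_scores = {}
    for subject, term_probs in subject_term_probs.items():
        score = math.log(subject_priors[subject])
        for term, n in counts.items():
            score += n * math.log(term_probs.get(term, smoothing))
        log_scores[subject] = score
    # Normalise in log space for numerical stability.
    top = max(log_scores.values())
    expd = {s: math.exp(v - top) for s, v in log_scores.items()}
    total = sum(expd.values())
    return {s: v / total for s, v in expd.items()}

print(classify(["quantum", "energy", "energy"]))
```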
Citations: 0
Security of Internet of Agents: Attacks and Countermeasures
Pub Date: 2025-07-16 | DOI: 10.1109/OJCS.2025.3589638
Yuntao Wang;Yanghe Pan;Shaolong Guo;Zhou Su
With the rise of large language and vision-language models, AI agents have evolved into autonomous, interactive systems capable of perception, reasoning, and decision-making. As they proliferate across virtual and physical domains, the Internet of Agents (IoA) has emerged as a key infrastructure for enabling scalable and secure coordination among heterogeneous agents. This survey offers a comprehensive examination of the security and privacy landscape in IoA systems. We begin by outlining the IoA architecture and its distinct vulnerabilities compared to traditional networks, focusing on four critical aspects: identity authentication threats, cross-agent trust issues, embodied security, and privacy risks. We then review existing and emerging defense mechanisms and highlight persistent challenges. Finally, we identify open research directions to advance the development of resilient and privacy-preserving IoA ecosystems.
Citations: 0
A Robust Cross-Channel Image Watermarking Technique for Tamper Detection and its Precise Localization
Pub Date: 2025-07-16 | DOI: 10.1109/OJCS.2025.3589948
Muhammad Ashraf;Adnan Nadeem;Oussama Benrhouma;Muhammad Sarim;Kashif Rizwan;Amir Mehmood
Several watermarking techniques have been suggested to safeguard the integrity of transmitted images in public video surveillance applications. However, these techniques have a critical drawback in their embedding schemes: the watermark is confined to a narrow, traceable space to avoid fidelity issues. Such a protection layer can be evaluated or forcefully removed to breach data security. Once the protection layer (watermark) is removed, a watermarking algorithm cannot pinpoint the falsified regions in affected images and gives only a binary answer. Consequently, attackers can present the falsification of visual elements as a non-malicious perturbation. This type of attack poses a serious security challenge. This study introduces a novel cross-channel image watermarking technique that randomly scatters the watermark pattern across the 24-bit image structure so that neither embedding signatures nor fidelity issues emerge after the process. Chaotic systems are employed to leverage their sensitivity to initial conditions and control parameters, giving the proposed scheme strong confusion and diffusion properties. The protection layer is completely intractable, as it is randomly scattered across the entire RGB space, making it very hard to remove without leaving a clear footprint in affected images. The method strikes a good balance between security and imperceptibility: it effectively detects and localizes falsified regions in tampered images, and it maintains this ability until clear evidence of a removal attempt emerges in the histograms. This property makes the proposed algorithm a preferred choice for data integrity protection; it achieved an average F1-score of 0.97 for tamper detection.
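As a rough illustration of the chaotic-scatter idea (not the authors' exact scheme), a logistic map can drive pseudorandom embedding positions across the 24-bit RGB structure; all parameters and function names below are assumptions:

```python
import numpy as np

def chaotic_positions(shape, n, key=0.4137, r=3.99):
    """Map a logistic-map orbit to (row, col, channel) embedding sites.

    x <- r * x * (1 - x) is highly sensitive to `key`, which is what
    gives such schemes their confusion/diffusion character. Collisions
    are possible in this toy version; a real scheme would deduplicate.
    """
    size = int(np.prod(shape))
    x, positions = key, []
    for _ in range(n):
        x = r * x * (1 - x)
        positions.append(np.unravel_index(int(x * size) % size, shape))
    return positions

def embed(image, bits, key=0.4137):
    """Scatter watermark bits into LSBs across all three RGB planes."""
    stego = image.copy()
    for bit, (i, j, k) in zip(bits, chaotic_positions(image.shape, len(bits), key)):
        stego[i, j, k] = (stego[i, j, k] & 0xFE) | int(bit)
    return stego

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
bits = np.random.randint(0, 2, 128)
stego = embed(img, bits)
print(np.count_nonzero(stego != img), "bytes touched")
```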
Citations: 0
Securing Industrial IoT Environments: A Fuzzy Graph Attention Network for Robust Intrusion Detection
Pub Date: 2025-07-10 | DOI: 10.1109/OJCS.2025.3587486
Safa Ben Atitallah;Maha Driss;Wadii Boulila;Anis Koubaa
The Industrial Internet of Things (IIoT) faces significant cybersecurity threats due to its ever-changing network structures, diverse data sources, and inherent uncertainties, making robust intrusion detection crucial. Conventional machine learning methods and typical Graph Neural Networks (GNNs) often struggle to capture the complexity and uncertainty in IIoT network traffic, which hampers their effectiveness in detecting intrusions. To address these limitations, we propose the Fuzzy Graph Attention Network (FGATN), a novel intrusion detection framework that fuses fuzzy logic, graph attention mechanisms, and GNNs to deliver high accuracy and robustness in IIoT environments. FGATN introduces three core innovations: (1) fuzzy membership functions to explicitly model uncertainty and imprecision in traffic features; (2) fuzzy similarity-based graph construction with adaptive edge pruning to build meaningful graph topologies that reflect real-world communication patterns; and (3) an attention-guided fuzzy graph convolution mechanism that dynamically prioritizes reliable and task-relevant neighbors during message passing. We evaluate FGATN on three public intrusion datasets, Edge-IIoTSet, WSN-DS, and CIC-Malmem-2022, achieving accuracies of 99.07%, 99.20%, and 99.05%, respectively. It consistently outperforms state-of-the-art GNN (GCN, GraphSAGE, FGCN) and deep learning models (DNN, GRU, RobustCBL). Ablation studies confirm the essential roles of both fuzzy logic and attention mechanisms in boosting detection accuracy. Furthermore, FGATN demonstrates strong scalability, maintaining high performance across a range of varying graph sizes. These results highlight FGATN as a robust and scalable solution for next-generation IIoT intrusion detection systems.
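Two of the innovations named in the abstract, fuzzy membership functions and attention-guided aggregation, can be sketched together in one layer. The following is a minimal PyTorch illustration under our own assumptions about dimensions and adjacency handling, not FGATN's actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuzzyGraphAttention(nn.Module):
    """Gaussian fuzzification of node features + attention aggregation.

    Membership functions model feature uncertainty, and an attention
    score weights neighbours during message passing. Dimensions and
    adjacency handling here are illustrative assumptions.
    """

    def __init__(self, in_dim, n_memberships=3, out_dim=16):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(in_dim, n_memberships))
        self.widths = nn.Parameter(torch.ones(in_dim, n_memberships))
        self.proj = nn.Linear(in_dim * n_memberships, out_dim)
        self.attn = nn.Linear(2 * out_dim, 1)

    def forward(self, x, adj):
        # Fuzzify: (N, F) -> (N, F, M) Gaussian membership degrees.
        mu = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2)
                       / (2 * self.widths ** 2 + 1e-8))
        h = self.proj(mu.flatten(1))                       # (N, D)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pairs).squeeze(-1)              # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = F.softmax(scores, dim=-1)                  # per-node weights
        return alpha @ h                                   # weighted aggregation

x = torch.randn(5, 8)                                       # 5 nodes, 8 features
adj = torch.eye(5) + torch.diag(torch.ones(4), diagonal=1)  # toy graph + self-loops
out = FuzzyGraphAttention(in_dim=8)(x, adj)
print(out.shape)                                            # torch.Size([5, 16])
```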
Citations: 0
Survey and Evaluation of Converging Architecture in LLMs Based on Footsteps of Operations
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3587005
Seongho Kim;Jihyun Moon;Juntaek Oh;Insu Choi;Joon-Sung Yang
Large language models (LLMs), which have emerged from advances in natural language processing (NLP), enable chatbots, virtual assistants, and numerous domain-specific applications. These models, often comprising billions of parameters, leverage the Transformer architecture and Attention mechanisms to process context effectively and address long-term dependencies more efficiently than earlier approaches, such as recurrent neural networks (RNNs). Notably, since the introduction of Llama, the architectural development of LLMs has significantly converged, predominantly settling on a Transformer-based decoder-only architecture. The evolution of LLMs has been driven by advances in high-bandwidth memory, specialized accelerators, and optimized architectures, enabling models to scale to billions of parameters. However, it also introduces new challenges: meeting compute and memory efficiency requirements across diverse deployment targets, ranging from data center servers to resource-constrained edge devices. To address these challenges, we survey the evolution of LLMs at two complementary levels: architectural trends and their underlying operational mechanisms. Furthermore, we quantify how hyperparameter settings influence inference latency by profiling kernel-level execution on a modern GPU architecture. Our findings reveal that identical models can exhibit varying performance based on hyperparameter configurations and deployment contexts, emphasizing the need for scalable and efficient solutions. The insights distilled from this analysis guide the optimization of performance and efficiency within these converged LLM architectures, thereby extending their applicability across a broader range of environments.
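The paper's kernel-level profiling is its own methodology, but the general idea of measuring how a hyperparameter drives inference latency can be approximated as follows. This toy probe times whole multi-head attention calls in PyTorch, not individual GPU kernels:

```python
import time
import torch

@torch.no_grad()
def time_attention(seq_len, d_model=512, n_heads=8, iters=20):
    """Average latency of multi-head attention at a given sequence length."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    attn = torch.nn.MultiheadAttention(d_model, n_heads, batch_first=True).to(device)
    x = torch.randn(1, seq_len, d_model, device=device)
    for _ in range(3):                       # warm-up (lazy init, caches)
        attn(x, x, x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        attn(x, x, x)
    if device == "cuda":
        torch.cuda.synchronize()             # wait for queued kernels
    return (time.perf_counter() - start) / iters

for n in (128, 512, 2048):                   # hyperparameter sweep
    print(f"seq_len={n}: {time_attention(n) * 1e3:.2f} ms/call")
```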
Citations: 0
A Robust Image Encryption Protocol for Secure Data Sharing in Brain Computer Interface Applications
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3587014
Sunil Prajapat;Pankaj Kumar;Kashish Chaudhary;Kranti Kumar;Gyanendra Kumar;Ali Kashif Bashir
Brain-computer interface (BCI) technology has emerged as a transformative means to link human neural activity with electronic devices. BCIs, which facilitate bidirectional communication between the brain and computers, are categorized as invasive, semi-invasive, and non-invasive. EEG (electroencephalography), a non-invasive technique recorded via electrodes placed on the scalp, serves as the primary data source for BCI systems. P300, a component of the human brain’s event-related potential, has gained prominence for detecting cognitive responses to stimuli. However, the susceptibility of BCI data to tampering during transmission underscores the critical need for robust security and privacy measures. To address security issues in P300-based BCI systems, this article introduces a novel elliptic curve-based certificateless encryption (CLE) technique integrated with image encryption protocols to safeguard the open communication pathway between near control and remote control devices. Our approach, unique in its exploration of ECC-based encryption for these systems, offers distinct advantages in security, demonstrating high accuracy in preserving data integrity and confidentiality. The security of our proposed scheme is rigorously validated using the Random Oracle Model. Simulations conducted using MATLAB evaluate the proposed image encryption protocol both theoretically and statistically, showing strong encryption performance against recent methods. Results include an entropy value of 7.98, Unified Average Changing Intensity (UACI) of 33.4%, Normalized Pixel Change Rate (NPCR) of 99.6%, and negative correlation coefficient values, indicating efficient and effective encryption and decryption processes.
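The entropy, NPCR, and UACI figures quoted above follow standard definitions, which can be computed as below. The encryption scheme itself is not reproduced here; the random arrays merely stand in for cipher images:

```python
import numpy as np

def entropy(img):
    """Shannon entropy of an 8-bit image; an ideal cipher image nears 8.0."""
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    nz = hist[hist > 0]
    return float(-np.sum(nz * np.log2(nz)))

def npcr_uaci(c1, c2):
    """NPCR (% of differing pixels) and UACI (% mean intensity change)."""
    npcr = float((c1 != c2).mean() * 100)
    uaci = float((np.abs(c1.astype(int) - c2.astype(int)) / 255).mean() * 100)
    return npcr, uaci

# Random arrays stand in for two cipher images produced from plaintexts
# differing in one pixel; ideal values are about 99.6% NPCR, 33.46% UACI.
c1 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
c2 = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(entropy(c1), npcr_uaci(c1, c2))
```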
Citations: 0
DriftShield: Autonomous Fraud Detection via Actor-Critic Reinforcement Learning With Dynamic Feature Reweighting
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3587001
Jialei Cao;Wenxia Zheng;Yao Ge;Jiyuan Wang
Financial fraud detection systems confront the persistent challenge of concept drift, where fraudulent patterns evolve continuously to evade detection mechanisms. Traditional rule-based methods and static machine learning models require frequent manual updates, failing to autonomously adapt to emerging fraud strategies. This article presents DriftShield, a novel adaptive fraud detection framework that addresses these limitations through four key technical innovations: (1) the first application of Soft Actor-Critic (SAC) reinforcement learning with continuous action spaces to fraud detection, enabling simultaneous fine-grained optimization of detection thresholds and feature importance weights; (2) a dynamic feature reweighting mechanism that automatically adapts to evolving fraud patterns while providing interpretable insights into changing fraud strategies; (3) an adaptive experience replay buffer combining sliding windows with prioritized sampling to balance catastrophic forgetting prevention with rapid concept drift adaptation; and (4) an entropy-driven exploration framework with automatic temperature tuning that intelligently balances exploitation of known fraud patterns with discovery of emerging threats. Experimental evaluation demonstrates that DriftShield achieves 18% higher fraud detection rates while maintaining lower false positive rates compared to static models. The system demonstrates 57% faster adaptation times, recovering optimal performance within 280 transactions after significant concept drift compared to 650 transactions for the next-best reinforcement learning approach. DriftShield attains a cumulative detection rate of 0.849, representing a 7.7% improvement over existing methods and establishing the efficacy of continuous-action reinforcement learning for autonomous adaptation in dynamic adversarial environments.
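Of the four innovations, the adaptive experience replay buffer is the easiest to sketch in isolation. A minimal illustration, assuming priority-proportional sampling over a bounded sliding window; this is our reading of the idea, not DriftShield's implementation:

```python
import random
from collections import deque

class AdaptiveReplayBuffer:
    """Sliding window plus priority-proportional sampling.

    The bounded deque lets stale pre-drift transitions fall out (fast
    adaptation), while priorities bias sampling toward high-TD-error
    transitions (less forgetting of what still matters).
    """

    def __init__(self, window=10_000):
        self.buffer = deque(maxlen=window)

    def add(self, transition, td_error):
        # A small constant keeps every transition sampleable.
        self.buffer.append((abs(td_error) + 1e-3, transition))

    def sample(self, batch_size):
        weights = [priority for priority, _ in self.buffer]
        picks = random.choices(list(self.buffer), weights=weights, k=batch_size)
        return [transition for _, transition in picks]

buf = AdaptiveReplayBuffer(window=1_000)
for step in range(2_000):                    # the first 1000 entries age out
    buf.add({"step": step}, td_error=random.random())
print(len(buf.buffer), buf.sample(4)[0])
```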
Citations: 0
Deep TPS-PSO: Hybrid Deep Feature Extraction and Global Optimization for Precise 3D MRI Registration
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3586956
Gayathri Ramasamy;Tripty Singh;Xiaohui Yuan;Ganesh R Naik
This article presents TPS-PSO, a hybrid deformable image registration framework integrating deep learning, non-linear transformation modeling, and global optimization for accurate inter-subject, intra-modality 3D brain MRI alignment. The method combines a 3D ResNet encoder to extract volumetric features, a Thin Plate Spline (TPS) model to capture smooth anatomical deformations, and Particle Swarm Optimization (PSO) to estimate transformation parameters efficiently without relying on gradients. Evaluated on the BraTS 2022 dataset, TPS-PSO achieved state-of-the-art performance with a Dice Similarity Coefficient (DSC) of 85.7%, Mutual Information (MI) of 1.23, Target Registration Error (TRE) of 3.8 mm, HD95 of 6.7 mm, and SSIM of 0.92. Comparative experiments against five recent baselines confirmed consistent improvements. Ablation studies and convergence analysis further validated the contribution of each module and the optimization strategy. The proposed framework generates topologically plausible deformation fields and shows strong potential for clinical and research applications in neuroimaging.
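The TPS-plus-PSO combination can be illustrated on a toy 2D landmark-registration problem: PSO searches over TPS control-point displacements to minimize a registration cost, with no gradients required. Everything below (problem size, inertia and acceleration constants) is assumed for illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Toy 2D landmark-registration problem with an unknown smooth warp.
ctrl = rng.uniform(0.0, 1.0, (4, 2))              # TPS control points
src = rng.uniform(0.0, 1.0, (30, 2))              # moving landmarks
true_disp = rng.normal(0.0, 0.05, ctrl.shape)
tgt = RBFInterpolator(ctrl, ctrl + true_disp,
                      kernel="thin_plate_spline")(src)   # fixed landmarks

def cost(disp_flat):
    """Mean squared landmark error after a TPS warp with these displacements."""
    warp = RBFInterpolator(ctrl, ctrl + disp_flat.reshape(ctrl.shape),
                           kernel="thin_plate_spline")
    return float(np.mean((warp(src) - tgt) ** 2))

# Bare-bones gradient-free PSO over the TPS displacement parameters.
n_particles, dim = 20, ctrl.size
pos = rng.normal(0.0, 0.05, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]
for _ in range(50):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()]
print("final registration cost:", cost(gbest))
```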
Citations: 0
Benchmarking Variants of the Adam Optimizer for Quantum Machine Learning Applications
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3586953
Tuan Hai Vu;Vu Trung Duong Le;Hoai Luan Pham;Yasuhiko Nakashima
Quantum Machine Learning is gaining traction by leveraging quantum advantage to outperform classical Machine Learning. Many classical and quantum optimizers have been proposed to train Parameterized Quantum Circuits in simulation environments, achieving high accuracy and fast convergence. However, to the best of our knowledge, there is currently no related work investigating these optimizers across multiple algorithms, which may lead to the selection of suboptimal optimizers. In this article, we first benchmark the most popular classical and quantum optimizers, such as Gradient Descent (GD), Adaptive Moment Estimation (Adam), and Quantum Natural Gradient Descent (QNG), on the Quantum Compilation algorithm. Evaluated metrics include the lowest cost value and wall-clock time. The results indicate that Adam outperforms the other optimizers in terms of convergence speed, cost value, and stability. Furthermore, we conduct additional experiments on multiple algorithms with Adam variants, demonstrating that the choice of hyperparameters significantly impacts the optimizer's performance.
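The Adam update rule being benchmarked is standard (Kingma & Ba) and can be written out directly. The cost function below is a toy stand-in for a parameterized-quantum-circuit landscape, not the paper's actual Quantum Compilation benchmark:

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; `state` carries the moment estimates (m, v, t)."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad                 # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2            # second-moment estimate
    m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)  # bias correction
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), (m, v, t)

# Toy cost standing in for a variational-circuit landscape:
# C(theta) = 1 - prod_k cos(theta_k), minimised at theta = 0.
def cost(th):
    return 1.0 - np.cos(th).prod()

def grad_cost(th):
    g = np.empty_like(th)
    for k in range(th.size):
        g[k] = np.sin(th[k]) * np.cos(np.delete(th, k)).prod()
    return g

theta0 = np.array([1.2, -0.7, 0.4])
theta_gd, theta_adam = theta0.copy(), theta0.copy()
state = (np.zeros_like(theta0), np.zeros_like(theta0), 0)
for _ in range(100):
    theta_gd = theta_gd - 0.1 * grad_cost(theta_gd)      # plain GD baseline
    theta_adam, state = adam_step(theta_adam, grad_cost(theta_adam), state)
print(f"GD cost: {cost(theta_gd):.4f}  Adam cost: {cost(theta_adam):.4f}")
```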
Citations: 0
Dynamic Spectrum Coexistence of NR-V2X and Wi-Fi 6E Using Deep Reinforcement Learning
Pub Date: 2025-07-07 | DOI: 10.1109/OJCS.2025.3586664
Kashish D. Shah;Dhaval K. Patel;Brijesh Soni;Siddhartan Govindasamy;Mehul S. Raval;Mukesh Zaveri
The deployment of 5G NR-based Cellular-V2X, i.e., the NR-V2X standard, is a promising solution to meet the increasing demand for vehicular data transmission in the low-frequency spectrum. The high throughput requirement of NR-V2X users can be met by extending it to utilize the sub-6GHz unlicensed spectrum, coexisting with Wi-Fi 6E, thus increasing the overall spectrum availability. Most existing works on coexistence rely on rule-based approaches or classical machine learning algorithms. These approaches may fall short in real-time environments where adaptive decision-making is required. In this context, we introduce a novel Deep Reinforcement Learning (DRL)-based framework for 5G NR-V2X (mode-1 and mode-2) and Wi-Fi 6E coexistence. We propose an algorithm to dynamically adjust the transmission time of the 5G NR-V2X (for mode-1) or Wi-Fi 6E (for mode-2), based on the Wi-Fi and V2X traffic, to maximize the overall throughput of both systems. The proposed algorithm is implemented through extensive simulations using the Network Simulator-3 (ns-3), integrated with a custom Deep Reinforcement Learning (DRL) framework developed using OpenAI Gym. This closed-loop integration enables realistic, dynamic interaction between the learning agent and high-fidelity network environments, representing a novel simulation setup for studying NR-V2X and Wi-Fi coexistence. The results show that when employing DRL on NR-V2X and Wi-Fi coexistence, the average data rates for Vehicular User Equipments (VUEs) and Wi-Fi User Equipments (WUEs) improve by approximately 24% and 23%, respectively, as compared to the static method, with even larger gains over the existing RL-based LTE-V2X and Wi-Fi coexistence approach. Additionally, we analyzed the impact of NR-V2X coexistence on the Wi-Fi subsystem under mode-1 and mode-2 communications. Our findings indicate that mode-1 communication demands more spectrum resources than mode-2, leading to a performance compromise for Wi-Fi.
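The action-reward loop described above can be mimicked with a toy gym-style environment in which a continuous action sets the NR-V2X airtime fraction. The traffic and throughput models below are invented for illustration; the paper drives a real ns-3 simulation instead:

```python
import random

class CoexistenceEnv:
    """Toy gym-style environment for duty-cycle tuning.

    The continuous action is the fraction of each frame granted to
    NR-V2X; Wi-Fi 6E gets the remainder. The reward is the combined
    served throughput, and the offered traffic drifts over time so a
    learning agent must keep adapting.
    """

    def reset(self):
        self.v2x_load = random.uniform(0.2, 1.0)    # offered V2X traffic
        self.wifi_load = random.uniform(0.2, 1.0)   # offered Wi-Fi traffic
        return (self.v2x_load, self.wifi_load)

    def step(self, action):
        share = min(max(action, 0.0), 1.0)          # NR-V2X airtime fraction
        reward = min(share, self.v2x_load) + min(1.0 - share, self.wifi_load)
        drift = lambda x: min(1.0, max(0.0, x + random.gauss(0.0, 0.05)))
        self.v2x_load, self.wifi_load = drift(self.v2x_load), drift(self.wifi_load)
        return (self.v2x_load, self.wifi_load), reward, False, {}

env = CoexistenceEnv()
obs = env.reset()
for _ in range(5):
    # Proportional-share heuristic standing in for the DRL policy.
    obs, reward, done, info = env.step(obs[0] / (obs[0] + obs[1] + 1e-9))
    print(round(reward, 3))
```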
Citations: 0