Pub Date: 2025-07-23 | DOI: 10.1109/OJCS.2025.3592254
Nikolaos Makris;Stamatina K. Koutsileou;Nikolaos Mitrou
Hierarchical Multilabel Classification (HMC) is a challenging task in information retrieval, especially for scientific textbooks, where the objective is to assign multiple labels that adhere to a hierarchical taxonomy. This research presents a new language-neutral HMC methodology that represents documents as normalised, weighted distributions of well-defined subjects across hierarchical levels, based on a hierarchical subject-term vocabulary. The proposed approach relies on Bayesian formulas rather than on the machine learning models used by typical methods, thereby removing the need for resource-intensive training at each hierarchical level. The method integrates refined pre-processing techniques, such as natural language processing (NLP) and filtering of non-distinctive terms, to enhance classification accuracy. It employs Bayesian inference together with real-time and cached computations across all hierarchical levels, yielding an effective, time-efficient and interpretable classification method that scales to large datasets. Experimental results demonstrate the algorithm's ability to classify scientific textbooks across hierarchical subject tiers with high precision and recall and to retrieve semantically related scientific textbooks, verifying its efficacy in tasks requiring hierarchical subject classification. This study thus presents a streamlined, interpretable alternative to model-dependent HMC approaches, making it particularly suitable for real-world applications in educational and scientific settings. Furthermore, two public Web user interfaces were published as part of this study: the first, built on Skosmos, illustrates the hierarchical structure of the subject-term vocabulary, while the second employs the HMC method to classify English and Greek textual data into subjects in real time.
{"title":"A Probabilistic Method for Hierarchical Multisubject Classification of Documents Based on Multilingual Subject Term Vocabularies","authors":"Nikolaos Makris;Stamatina K. Koutsileou;Nikolaos Mitrou","doi":"10.1109/OJCS.2025.3592254","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3592254","url":null,"abstract":"Hierarchical Multilabel Classification (HMC) is a challenging task in information retrieval, especially within scientific textbooks, where the objective is to allocate multiple labels adhering to a hierarchical taxonomy. This research presents a new language neutral methodology for HMC to assess documents as normalised weighted distributions of well-defined subjects across hierarchical levels, based on a hierarchical subject term vocabulary. The proposed approach utilizes Bayesian formulas, in contrast to typical methods that depend on machine learning models, thereby obviating the necessity for resource-intensive training processes at various hierarchical levels. The method integrates refined pre-processing techniques, such as natural language processing (NLP) and filtering of non-distinctive terms, to enhance classification accuracy. It employs Bayesian inference along with real time and cached computations across all hierarchical levels, yielding an effective, time-efficient and interpretable classification method while ensuring scalability for large datasets. Experimental results demonstrate the potency of the algorithm to classify scientific textbooks across hierarchical subject tiers with significant precision and recall and retrieve semantically related scientific textbooks, thereby verifying its efficacy in tasks requiring hierarchical subject classification. This study presents a streamlined, interpretable alternative to model-dependent HMC approaches, rendering it particularly appropriate for real-world applications in educational and scientific fields. Furthermore, in the context of the present study, two public Web User Interfaces were published, the first is founded on Skosmos to illustrate the hierarchical structure of the subject term vocabulary, while the second one employs the HMC method to present in real-time the classification between subjects in English and Greek textual data.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1294-1305"},"PeriodicalIF":0.0,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11095338","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144880509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-16 | DOI: 10.1109/OJCS.2025.3589638
Yuntao Wang;Yanghe Pan;Shaolong Guo;Zhou Su
With the rise of large language and vision-language models, AI agents have evolved into autonomous, interactive systems capable of perception, reasoning, and decision-making. As they proliferate across virtual and physical domains, the Internet of Agents (IoA) has emerged as a key infrastructure for enabling scalable and secure coordination among heterogeneous agents. This survey offers a comprehensive examination of the security and privacy landscape in IoA systems. We begin by outlining the IoA architecture and its distinct vulnerabilities compared to traditional networks, focusing on four critical aspects: identity authentication threats, cross-agent trust issues, embodied security, and privacy risks. We then review existing and emerging defense mechanisms and highlight persistent challenges. Finally, we identify open research directions to advance the development of resilient and privacy-preserving IoA ecosystems.
{"title":"Security of Internet of Agents: Attacks and Countermeasures","authors":"Yuntao Wang;Yanghe Pan;Shaolong Guo;Zhou Su","doi":"10.1109/OJCS.2025.3589638","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3589638","url":null,"abstract":"With the rise of large language and vision-language models, AI agents have evolved into autonomous, interactive systems capable of perception, reasoning, and decision-making. As they proliferate across virtual and physical domains, the Internet of Agents (IoA) has emerged as a key infrastructure for enabling scalable and secure coordination among heterogeneous agents. This survey offers a comprehensive examination of the security and privacy landscape in IoA systems. We begin by outlining the IoA architecture and its distinct vulnerabilities compared to traditional networks, focusing on four critical aspects: identity authentication threats, cross-agent trust issues, embodied security, and privacy risks. We then review existing and emerging defense mechanisms and highlight persistent challenges. Finally, we identify open research directions to advance the development of resilient and privacy-preserving IoA ecosystems.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1611-1624"},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11081880","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145351891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-16 | DOI: 10.1109/OJCS.2025.3589948
Muhammad Ashraf;Adnan Nadeem;Oussama Benrhouma;Muhammad Sarim;Kashif Rizwan;Amir Mehmood
Several watermarking techniques have been proposed to safeguard the integrity of transmitted images in public video-surveillance applications. However, these techniques share a critical drawback in their embedding schemes: to avoid fidelity issues, the watermark is confined to a narrow, traceable space. Such a protection layer can be evaluated or forcefully removed to breach data security. Once the protection layer (watermark) is removed, a watermarking algorithm can no longer pinpoint the falsified regions in affected images and gives only a binary answer. Consequently, attackers can present the falsification of visual elements as a non-malicious perturbation, which poses a serious security challenge. This study introduces a novel cross-channel image watermarking technique that randomly scatters the watermark pattern across the 24-bit image structure so that neither embedding signatures nor fidelity issues emerge after the process. Chaotic systems are employed for their sensitivity to initial conditions and control parameters, giving the proposed scheme strong confusion and diffusion properties. The protection layer is effectively untraceable because it is randomly scattered across the entire RGB space, making it very hard to remove without leaving a clear footprint in affected images. The method strikes a good balance between security and imperceptibility; it effectively detects and localizes falsified regions in tampered images, and it retains this ability until clear evidence of a removal attempt emerges in the histograms. These properties make the proposed algorithm a preferred choice for data-integrity protection; it achieved an average F1-score of 0.97 for tamper detection.
{"title":"A Robust Cross-Channel Image Watermarking Technique for Tamper Detection and its Precise Localization","authors":"Muhammad Ashraf;Adnan Nadeem;Oussama Benrhouma;Muhammad Sarim;Kashif Rizwan;Amir Mehmood","doi":"10.1109/OJCS.2025.3589948","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3589948","url":null,"abstract":"Several watermarking techniques have been suggested to safeguard the integrity of transmitted images in public video surveillance applications. However, these techniques have a critical drawback in their embedding schemes: the watermark is limited to residing in a narrow traceable space to avoid fidelity issues. Such a protection layer can be evaluated or forcefully removed to breach data security. Once the protection layer (watermark) is removed, a watermarking algorithm cannot pinpoint the falsified regions in affected images and gives a binary answer. Consequently, attackers can present the falsification of visual elements as a non-malicious perturbation. Such a type of attack poses a serious security challenge. This study introduces a novel cross-channel image watermarking technique that randomly scatters the watermark pattern across a 24-bit image structure so that no emergence of embedding signatures and fidelity issues occurs after the process. Chaotic systems are employed to leverage their sensitivity to initial conditions and control parameters, resulting in high confusion and diffusion properties in the proposed scheme. The protection layer is completely intractable as it is randomly scattered in the entire RGB space, making it very hard to remove without leaving a clear footprint in affected images. This method creates a good balance between security and imperceptibility, it effectively detects and localizes falsified regions in tampered images, and maintains this ability until clear evidence of a removal attempt emerges in histograms. This property makes proposed algorithm a preferred choice for data integrity protection; it achieved an average F1-score of 0.97 for tamper detection.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1202-1213"},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11081476","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144750810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-10 | DOI: 10.1109/OJCS.2025.3587486
Safa Ben Atitallah;Maha Driss;Wadii Boulila;Anis Koubaa
The Industrial Internet of Things (IIoT) faces significant cybersecurity threats due to its ever-changing network structures, diverse data sources, and inherent uncertainties, making robust intrusion detection crucial. Conventional machine learning methods and typical Graph Neural Networks (GNNs) often struggle to capture the complexity and uncertainty in IIoT network traffic, which hampers their effectiveness in detecting intrusions. To address these limitations, we propose the Fuzzy Graph Attention Network (FGATN), a novel intrusion detection framework that fuses fuzzy logic, graph attention mechanisms, and GNNs to deliver high accuracy and robustness in IIoT environments. FGATN introduces three core innovations: (1) fuzzy membership functions that explicitly model uncertainty and imprecision in traffic features; (2) fuzzy similarity-based graph construction with adaptive edge pruning, which builds meaningful graph topologies that reflect real-world communication patterns; and (3) an attention-guided fuzzy graph convolution mechanism that dynamically prioritizes reliable and task-relevant neighbors during message passing. We evaluate FGATN on three public intrusion datasets, Edge-IIoTSet, WSN-DS, and CIC-Malmem-2022, achieving accuracies of 99.07%, 99.20%, and 99.05%, respectively. It consistently outperforms state-of-the-art GNN models (GCN, GraphSAGE, FGCN) and deep learning models (DNN, GRU, RobustCBL). Ablation studies confirm the essential roles of both the fuzzy logic and the attention mechanisms in boosting detection accuracy. Furthermore, FGATN demonstrates strong scalability, maintaining high performance across varying graph sizes. These results highlight FGATN as a robust and scalable solution for next-generation IIoT intrusion detection systems.
{"title":"Securing Industrial IoT Environments: A Fuzzy Graph Attention Network for Robust Intrusion Detection","authors":"Safa Ben Atitallah;Maha Driss;Wadii Boulila;Anis Koubaa","doi":"10.1109/OJCS.2025.3587486","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3587486","url":null,"abstract":"The Industrial Internet of Things (IIoT) faces significant cybersecurity threats due to its ever-changing network structures, diverse data sources, and inherent uncertainties, making robust intrusion detection crucial. Conventional machine learning methods and typical Graph Neural Networks (GNNs) often struggle to capture the complexity and uncertainty in IIoT network traffic, which hampers their effectiveness in detecting intrusions. To address these limitations, we propose the Fuzzy Graph Attention Network (FGATN), a novel intrusion detection framework that fuses fuzzy logic, graph attention mechanisms, and GNNs to deliver high accuracy and robustness in IIoT environments. FGATN introduces three core innovations: (1) fuzzy membership functions to explicitly model uncertainty and imprecision in traffic features; (2) fuzzy similarity-based graph construction with adaptive edge pruning to build meaningful graph topologies that reflect real-world communication patterns; and (3) an attention-guided fuzzy graph convolution mechanism that dynamically prioritizes reliable and task-relevant neighbors during message passing. We evaluate FGATN on three public intrusion datasets, Edge-IIoTSet, WSN-DS, and CIC-Malmem-2022, achieving accuracies of 99.07%, 99.20%, and 99.05%, respectively. It consistently outperforms state-of-the-art GNN (GCN, GraphSAGE, FGCN) and deep learning models (DNN, GRU, RobustCBL). Ablation studies confirm the essential roles of both fuzzy logic and attention mechanisms in boosting detection accuracy. Furthermore, FGATN demonstrates strong scalability, maintaining high performance across a range of varying graph sizes. These results highlight FGATN as a robust and scalable solution for next-generation IIoT intrusion detection systems.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1065-1076"},"PeriodicalIF":0.0,"publicationDate":"2025-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11075530","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144657392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3587005
Seongho Kim;Jihyun Moon;Juntaek Oh;Insu Choi;Joon-Sung Yang
Large language models (LLMs), which have emerged from advances in natural language processing (NLP), enable chatbots, virtual assistants, and numerous domain-specific applications. These models, often comprising billions of parameters, leverage the Transformer architecture and Attention mechanisms to process context effectively and address long-term dependencies more efficiently than earlier approaches, such as recurrent neural networks (RNNs). Notably, since the introduction of Llama, the architectural development of LLMs has significantly converged, predominantly settling on a Transformer-based decoder-only architecture. The evolution of LLMs has been driven by advances in high-bandwidth memory, specialized accelerators, and optimized architectures, enabling models to scale to billions of parameters. However, it also introduces new challenges: meeting compute and memory efficiency requirements across diverse deployment targets, ranging from data center servers to resource-constrained edge devices. To address these challenges, we survey the evolution of LLMs at two complementary levels: architectural trends and their underlying operational mechanisms. Furthermore, we quantify how hyperparameter settings influence inference latency by profiling kernel-level execution on a modern GPU architecture. Our findings reveal that identical models can exhibit varying performance based on hyperparameter configurations and deployment contexts, emphasizing the need for scalable and efficient solutions. The insights distilled from this analysis guide the optimization of performance and efficiency within these converged LLM architectures, thereby extending their applicability across a broader range of environments.
{"title":"Survey and Evaluation of Converging Architecture in LLMs Based on Footsteps of Operations","authors":"Seongho Kim;Jihyun Moon;Juntaek Oh;Insu Choi;Joon-Sung Yang","doi":"10.1109/OJCS.2025.3587005","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3587005","url":null,"abstract":"Large language models (LLMs), which have emerged from advances in natural language processing (NLP), enable chatbots, virtual assistants, and numerous domain-specific applications. These models, often comprising billions of parameters, leverage the Transformer architecture and Attention mechanisms to process context effectively and address long-term dependencies more efficiently than earlier approaches, such as recurrent neural networks (RNNs). Notably, since the introduction of Llama, the architectural development of LLMs has significantly converged, predominantly settling on a Transformer-based decoder-only architecture. The evolution of LLMs has been driven by advances in high-bandwidth memory, specialized accelerators, and optimized architectures, enabling models to scale to billions of parameters. However, it also introduces new challenges: meeting compute and memory efficiency requirements across diverse deployment targets, ranging from data center servers to resource-constrained edge devices. To address these challenges, we survey the evolution of LLMs at two complementary levels: architectural trends and their underlying operational mechanisms. Furthermore, we quantify how hyperparameter settings influence inference latency by profiling kernel-level execution on a modern GPU architecture. Our findings reveal that identical models can exhibit varying performance based on hyperparameter configurations and deployment contexts, emphasizing the need for scalable and efficient solutions. The insights distilled from this analysis guide the optimization of performance and efficiency within these converged LLM architectures, thereby extending their applicability across a broader range of environments.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1214-1226"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072851","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144782051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain-computer interface (BCI) technology has emerged as a transformative means to link human neural activity with electronic devices. BCIs, which facilitate bidirectional communication between the brain and computers, are categorized as invasive, semi-invasive, and non-invasive. EEG (electroencephalography), a non-invasive technique recorded via electrodes placed on the scalp, serves as the primary data source for BCI systems. P300, a component of the human brain’s event-related potential, has gained prominence for detecting cognitive responses to stimuli. However, the susceptibility of BCI data to tampering during transmission underscores the critical need for robust security and privacy measures. To address security issues in P300-based BCI systems, this article introduces a novel elliptic curve-based certificateless encryption (CLE) technique integrated with image encryption protocols to safeguard the open communication pathway between near control and remote control devices. Our approach, unique in its exploration of ECC-based encryption for these systems, offers distinct advantages in security, demonstrating high accuracy in preserving data integrity and confidentiality. The security of our proposed scheme is rigorously validated using the Random Oracle Model. Simulations conducted using MATLAB evaluate the proposed image encryption protocol both theoretically and statistically, showing strong encryption performance against recent methods. Results include an entropy value of 7.98, Unified Average Changing Intensity (UACI) of 33.4%, Normalized Pixel Change Rate (NPCR) of 99.6%, and negative correlation coefficient values, indicating efficient and effective encryption and decryption processes.
{"title":"A Robust Image Encryption Protocol for Secure Data Sharing in Brain Computer Interface Applications","authors":"Sunil Prajapat;Pankaj Kumar;Kashish Chaudhary;Kranti Kumar;Gyanendra Kumar;Ali Kashif Bashir","doi":"10.1109/OJCS.2025.3587014","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3587014","url":null,"abstract":"Brain-computer interface (BCI) technology has emerged as a transformative means to link human neural activity with electronic devices. BCIs, which facilitate bidirectional communication between the brain and computers, are categorized as invasive, semi-invasive, and non-invasive. EEG (electroencephalography), a non-invasive technique recorded via electrodes placed on the scalp, serves as the primary data source for BCI systems. P300, a component of the human brain’s event-related potential, has gained prominence for detecting cognitive responses to stimuli. However, the susceptibility of BCI data to tampering during transmission underscores the critical need for robust security and privacy measures. To address security issues in P300-based BCI systems, this article introduces a novel elliptic curve-based certificateless encryption (CLE) technique integrated with image encryption protocols to safeguard the open communication pathway between near control and remote control devices. Our approach, unique in its exploration of ECC-based encryption for these systems, offers distinct advantages in security, demonstrating high accuracy in preserving data integrity and confidentiality. The security of our proposed scheme is rigorously validated using the Random Oracle Model. Simulations conducted using MATLAB evaluate the proposed image encryption protocol both theoretically and statistically, showing strong encryption performance against recent methods. Results include an entropy value of 7.98, Unified Average Changing Intensity (UACI) of 33.4%, Normalized Pixel Change Rate (NPCR) of 99.6%, and negative correlation coefficient values, indicating efficient and effective encryption and decryption processes.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1190-1201"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072718","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144782052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3587001
Jialei Cao;Wenxia Zheng;Yao Ge;Jiyuan Wang
Financial fraud detection systems confront the persistent challenge of concept drift, where fraudulent patterns evolve continuously to evade detection mechanisms. Traditional rule-based methods and static machine learning models require frequent manual updates, failing to autonomously adapt to emerging fraud strategies. This article presents DriftShield, a novel adaptive fraud detection framework that addresses these limitations through four key technical innovations: (1) the first application of Soft Actor-Critic (SAC) reinforcement learning with continuous action spaces to fraud detection, enabling simultaneous fine-grained optimization of detection thresholds and feature importance weights; (2) a dynamic feature reweighting mechanism that automatically adapts to evolving fraud patterns while providing interpretable insights into changing fraud strategies; (3) an adaptive experience replay buffer combining sliding windows with prioritized sampling to balance catastrophic forgetting prevention with rapid concept drift adaptation; and (4) an entropy-driven exploration framework with automatic temperature tuning that intelligently balances exploitation of known fraud patterns with discovery of emerging threats. Experimental evaluation demonstrates that DriftShield achieves 18% higher fraud detection rates while maintaining lower false positive rates compared to static models. The system demonstrates 57% faster adaptation times, recovering optimal performance within 280 transactions after significant concept drift compared to 650 transactions for the next-best reinforcement learning approach. DriftShield attains a cumulative detection rate of 0.849, representing a 7.7% improvement over existing methods and establishing the efficacy of continuous-action reinforcement learning for autonomous adaptation in dynamic adversarial environments.
{"title":"DriftShield: Autonomous Fraud Detection via Actor-Critic Reinforcement Learning With Dynamic Feature Reweighting","authors":"Jialei Cao;Wenxia Zheng;Yao Ge;Jiyuan Wang","doi":"10.1109/OJCS.2025.3587001","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3587001","url":null,"abstract":"Financial fraud detection systems confront the persistent challenge of concept drift, where fraudulent patterns evolve continuously to evade detection mechanisms. Traditional rule-based methods and static machine learning models require frequent manual updates, failing to autonomously adapt to emerging fraud strategies. This article presents DriftShield, a novel adaptive fraud detection framework that addresses these limitations through four key technical innovations: (1) the first application of Soft Actor-Critic (SAC) reinforcement learning with continuous action spaces to fraud detection, enabling simultaneous fine-grained optimization of detection thresholds and feature importance weights; (2) a dynamic feature reweighting mechanism that automatically adapts to evolving fraud patterns while providing interpretable insights into changing fraud strategies; (3) an adaptive experience replay buffer combining sliding windows with prioritized sampling to balance catastrophic forgetting prevention with rapid concept drift adaptation; and (4) an entropy-driven exploration framework with automatic temperature tuning that intelligently balances exploitation of known fraud patterns with discovery of emerging threats. Experimental evaluation demonstrates that DriftShield achieves 18% higher fraud detection rates while maintaining lower false positive rates compared to static models. The system demonstrates 57% faster adaptation times, recovering optimal performance within 280 transactions after significant concept drift compared to 650 transactions for the next-best reinforcement learning approach. DriftShield attains a cumulative detection rate of 0.849, representing a 7.7% improvement over existing methods and establishing the efficacy of continuous-action reinforcement learning for autonomous adaptation in dynamic adversarial environments.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1166-1177"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072929","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144716158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3586956
Gayathri Ramasamy;Tripty Singh;Xiaohui Yuan;Ganesh R Naik
This article presents TPS-PSO, a hybrid deformable image registration framework integrating deep learning, non-linear transformation modeling, and global optimization for accurate inter-subject, intra-modality 3D brain MRI alignment. The method combines a 3D ResNet encoder to extract volumetric features, a Thin Plate Spline (TPS) model to capture smooth anatomical deformations, and Particle Swarm Optimization (PSO) to estimate transformation parameters efficiently without relying on gradients. Evaluated on the BraTS 2022 dataset, TPS-PSO achieved state-of-the-art performance with a Dice Similarity Coefficient (DSC) of 85.7%, Mutual Information (MI) of 1.23, Target Registration Error (TRE) of 3.8 mm, HD95 of 6.7 mm, and SSIM of 0.92. Comparative experiments against five recent baselines confirmed consistent improvements. Ablation studies and convergence analysis further validated the contribution of each module and the optimization strategy. The proposed framework generates topologically plausible deformation fields and shows strong potential for clinical and research applications in neuroimaging.
{"title":"Deep TPS-PSO: Hybrid Deep Feature Extraction and Global Optimization for Precise 3D MRI Registration","authors":"Gayathri Ramasamy;Tripty Singh;Xiaohui Yuan;Ganesh R Naik","doi":"10.1109/OJCS.2025.3586956","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3586956","url":null,"abstract":"This article presents TPS-PSO, a hybrid deformable image registration framework integrating deep learning, non-linear transformation modeling, and global optimization for accurate inter-subject, intra-modality 3D brain MRI alignment. The method combines a 3D ResNet encoder to extract volumetric features, a Thin Plate Spline (TPS) model to capture smooth anatomical deformations, and Particle Swarm Optimization (PSO) to estimate transformation parameters efficiently without relying on gradients. Evaluated on the BraTS 2022 dataset, TPS-PSO achieved state-of-the-art performance with a Dice Similarity Coefficient (DSC) of 85.7%, Mutual Information (MI) of 1.23, Target Registration Error (TRE) of 3.8 mm, HD95 of 6.7 mm, and SSIM of 0.92. Comparative experiments against five recent baselines confirmed consistent improvements. Ablation studies and convergence analysis further validated the contribution of each module and the optimization strategy. The proposed framework generates topologically plausible deformation fields and shows strong potential for clinical and research applications in neuroimaging.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1090-1099"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072820","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144657376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-08 | DOI: 10.1109/OJCS.2025.3586953
Tuan Hai Vu;Vu Trung Duong Le;Hoai Luan Pham;Yasuhiko Nakashima
Quantum Machine Learning is gaining traction by leveraging quantum advantage to outperform classical Machine Learning. Many classical and quantum optimizers have been proposed to train Parameterized Quantum Circuits in the simulation environment, achieving high accuracy and fast convergence speed. However, to the best of our knowledge, currently there is no related work investigating these optimizers on multiple algorithms, which may lead to the selection of suboptimal optimizers. In this article, we first benchmark the most popular classical and quantum optimizers, such as Gradient Descent (GD), Adaptive Moment Estimation (Adam), and Quantum Natural Gradient Descent (QNG), through the Quantum Compilation algorithm. Evaluated metrics include the lowest cost value and the wall time. The results indicate that Adam outperforms other optimizers in terms of convergence speed, cost value, and stability. Furthermore, we conduct additional experiments on multiple algorithms with Adam variants, demonstrating that the choice of hyperparameters significantly impacts the optimizer’s performance.
{"title":"Benchmarking Variants of the Adam Optimizer for Quantum Machine Learning Applications","authors":"Tuan Hai Vu;Vu Trung Duong Le;Hoai Luan Pham;Yasuhiko Nakashima","doi":"10.1109/OJCS.2025.3586953","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3586953","url":null,"abstract":"Quantum Machine Learning is gaining traction by leveraging quantum advantage to outperform classical Machine Learning. Many classical and quantum optimizers have been proposed to train Parameterized Quantum Circuits in the simulation environment, achieving high accuracy and fast convergence speed. However, to the best of our knowledge, currently there is no related work investigating these optimizers on multiple algorithms, which may lead to the selection of suboptimal optimizers. In this article, we first benchmark the most popular classical and quantum optimizers, such as Gradient Descent (GD), Adaptive Moment Estimation (Adam), and Quantum Natural Gradient Descent (QNG), through the Quantum Compilation algorithm. Evaluated metrics include the lowest cost value and the wall time. The results indicate that Adam outperforms other optimizers in terms of convergence speed, cost value, and stability. Furthermore, we conduct additional experiments on multiple algorithms with Adam variants, demonstrating that the choice of hyperparameters significantly impacts the optimizer’s performance.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1146-1154"},"PeriodicalIF":0.0,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072814","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144716221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-07-07 | DOI: 10.1109/OJCS.2025.3586664
Kashish D. Shah;Dhaval K. Patel;Brijesh Soni;Siddhartan Govindasamy;Mehul S. Raval;Mukesh Zaveri
The deployment of 5G NR-based Cellular-V2X, i.e., the NR-V2X standard, is a promising solution to meet the increasing demand for vehicular data transmission in the low-frequency spectrum. The high throughput requirements of NR-V2X users can be met by extending the standard to the sub-6 GHz unlicensed spectrum, coexisting with Wi-Fi 6E and thus increasing the overall spectrum availability. Most existing works on coexistence rely on rule-based approaches or classical machine learning algorithms, which may fall short in real-time environments where adaptive decision-making is required. In this context, we introduce a novel Deep Reinforcement Learning (DRL) based framework for 5G NR-V2X (mode-1 and mode-2) and Wi-Fi 6E coexistence. We propose an algorithm that dynamically adjusts the transmission time of 5G NR-V2X (for mode-1) or Wi-Fi 6E (for mode-2), based on the Wi-Fi and V2X traffic, to maximize the overall throughput of both systems. The proposed algorithm is implemented through extensive simulations using Network Simulator-3 (ns-3), integrated with a custom DRL framework developed using OpenAI Gym. This closed-loop integration enables realistic, dynamic interaction between the learning agent and high-fidelity network environments, representing a novel simulation setup for studying NR-V2X and Wi-Fi coexistence. The results show that when DRL is employed for NR-V2X and Wi-Fi coexistence, the average data rates for Vehicular User Equipments (VUEs) and Wi-Fi User Equipments (WUEs) improve by approximately 24% and 23%, respectively, compared to the static method, with an even higher improvement over the existing RL-based LTE-V2X and Wi-Fi coexistence approach. Additionally, we analyzed the impact of NR-V2X coexistence on the Wi-Fi subsystem under mode-1 and mode-2 communications. Our findings indicate that mode-1 communication demands more spectrum resources than mode-2, leading to a performance compromise for Wi-Fi.
{"title":"Dynamic Spectrum Coexistence of NR-V2X and Wi-Fi 6E Using Deep Reinforcement Learning","authors":"Kashish D. Shah;Dhaval K. Patel;Brijesh Soni;Siddhartan Govindasamy;Mehul S. Raval;Mukesh Zaveri","doi":"10.1109/OJCS.2025.3586664","DOIUrl":"https://doi.org/10.1109/OJCS.2025.3586664","url":null,"abstract":"The deployment of 5G NR-based Cellular-V2X, i.e., the NR-V2X standard, is a promising solution to meet the increasing demand for vehicular data transmission in the low-frequency spectrum. The high throughput requirement of NR-V2X users can be overcome by extending it to utilize the sub-6GHz unlicensed spectrum, coexisting with Wi-Fi 6E, thus increasing the overall spectrum availability. Most existing works on coexistence rely on rule-based approaches or classical machine learning algorithms. These approaches may fall short in real-time environments where adaptive decision-making is required. In this context, we introduce a novel Deep Reinforcement learning (DRL) based framework for 5G NR-V2X (mode-1 and mode-2) and Wi-Fi 6E coexistence. We propose an algorithm to dynamically adjust the transmission time of the 5G NR-V2X (for mode-1) or Wi-Fi 6E (for mode-2), based on the Wi-Fi and V2X traffic, to maximize the overall throughput of both systems. The proposed algorithm is implemented through extensive simulations using the Network Simulator-3 (ns-3), integrated with a custom Deep Reinforcement Learning (DRL) framework developed using OpenAIGym. This closed-loop integration enables realistic, dynamic interaction between the learning agent and high-fidelity network environments, representing a novel simulation setup for studying NR-V2X and Wi-Fi coexistence. The results show that when employing DRL on NR-V2X and Wi-Fi coexistence, the average data rates for Vehicular User Equipments (VUEs) and Wi-Fi User Equipments (WUEs) improve by <inline-formula><tex-math>$sim$</tex-math></inline-formula>24% and 23%, respectively, as compared to the static method; and even higher improvement when compared to the existing RL-based LTE-V2X and Wi-Fi coexistence approach. Additionally, we analyzed the impact of NR-V2X coexistence on the Wi-Fi subsystem under mode-1 and mode-2 communications. Our findings indicate that mode-1 communication demands more spectrum resources than mode-2, leading to a performance compromise for Wi-Fi.","PeriodicalId":13205,"journal":{"name":"IEEE Open Journal of the Computer Society","volume":"6 ","pages":"1133-1145"},"PeriodicalIF":0.0,"publicationDate":"2025-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11072315","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144716220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}