Pub Date: 2024-12-18 | DOI: 10.1109/TNSE.2024.3519624
Xudong Zhao;Wei Xing;Xinyu Wang;Ning Zhao
This paper considers the security of cyber-physical systems (CPSs) subject to replay attacks, in which sensor measurements are transmitted to a remote estimator over a wireless communication network. We present a novel stochastic event-triggered feedback physical watermarking technique that effectively mitigates the impact of replay attacks while simultaneously addressing the performance degradation caused by the inclusion of physical watermarks. This approach dynamically adjusts the probability of adding physical watermarks based on the system's current operational state, striking a balance between optimal performance and effective countermeasures against replay attacks; as a result, the probability of adding physical watermarks increases when the system is subjected to malicious replay attacks. Furthermore, the performance of both the system and the detector is thoroughly characterized in two distinct scenarios: (i) the system operating under normal conditions, and (ii) the system under replay attack. These scenarios allow for a comprehensive evaluation of the system's capabilities and the detector's efficacy in detecting and mitigating potential security threats. Finally, simulation examples are provided to corroborate and illustrate the theoretical results.
Title: Stochastic Event-Triggered Feedback Physical Watermarks Against Replay Attacks
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 814-822.
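The core idea of state-dependent watermark triggering can be sketched in a few lines. All function names, the linear ramp, and every constant below are illustrative assumptions, not the paper's actual design:

```python
import random

def watermark_probability(residual, threshold, p_min=0.05, p_max=0.9):
    """Map the detector residual to a watermark-injection probability:
    low residuals (nominal operation) keep the probability at p_min to
    preserve control performance; residuals above the threshold raise
    it along a saturating linear ramp."""
    if residual <= threshold:
        return p_min
    ramp = min((residual - threshold) / threshold, 1.0)
    return p_min + (p_max - p_min) * ramp

def control_input(u_nominal, residual, threshold, rng=None):
    """Inject a zero-mean Gaussian watermark into the nominal control
    input with the state-dependent probability above."""
    rng = rng or random.Random(0)
    if rng.random() < watermark_probability(residual, threshold):
        return u_nominal + rng.gauss(0.0, 0.1)  # watermark realisation
    return u_nominal
```

Under this rule the expected watermark energy (and hence the performance loss) stays small in normal operation and grows only when the detector becomes suspicious, which is the trade-off the abstract describes.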
Machine learning (ML)-based network intrusion detection systems (NIDS) have become a promising approach to efficiently protecting network communications. However, ML models can be exploited by adversarial poisoning attacks, such as Random Label Manipulation (RLM), which can compromise multi-controller software-defined network (MSDN) operations. In this paper, we develop the Trans-controller Adversarial Perturbation Detection (TAPD) framework for NIDS in MSDNs. The framework takes advantage of the MSDN architecture: it periodically transfers ML-based NIDS models across the SDN controllers in the topology and validates the models against local datasets to calculate error rates. We demonstrate the efficacy of this framework in detecting RLM attacks in an MSDN setup. Results indicate that TAPD efficiently detects the presence of RLM attacks and localizes the compromised controllers. We find that the framework works well even when a significant number of agents are compromised; however, performance begins to deteriorate once more than 40% of the SDN controllers are compromised.
Title: Bringing to Light: Adversarial Poisoning Detection for ML-Based IDS in Software-Defined Networks
Authors: Tapadhir Das; Raj Mani Shukla; Suman Rath; Shamik Sengupta
Pub Date: 2024-12-18 | DOI: 10.1109/TNSE.2024.3519515
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 791-802.
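The cross-controller validation step the abstract describes — evaluate each controller's model on the other controllers' local data and flag outliers — can be sketched as follows. The function names, the `tolerance` threshold, and the mean-error flagging rule are illustrative assumptions, not TAPD's actual procedure:

```python
def validation_error(model, dataset):
    """Fraction of local samples that the transferred model mislabels."""
    wrong = sum(1 for x, y in dataset if model(x) != y)
    return wrong / len(dataset)

def flag_compromised(models, datasets, tolerance=0.2):
    """Evaluate every controller's NIDS model on every *other*
    controller's local dataset; a model whose mean cross-validation
    error exceeds `tolerance` marks its home controller as suspect."""
    flagged = []
    for i, model in enumerate(models):
        errors = [validation_error(model, data)
                  for j, data in enumerate(datasets) if j != i]
        if sum(errors) / len(errors) > tolerance:
            flagged.append(i)
    return flagged
```

A label-flipped (RLM-poisoned) model disagrees with the clean local datasets of its peers, so its cross-validation error stands out even though it may fit its own poisoned data well.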
Pub Date: 2024-12-17 | DOI: 10.1109/TNSE.2024.3519155
Fengrui Xiao;Shuangwu Chen;Siyang Chen;Yuanyi Ma;Huasen He;Jian Yang
Insider threats have become a prominent driver behind a myriad of cybersecurity incidents in recent years. Since these threats originate within the intranet, traditional security devices located at the network perimeter can hardly detect them; likewise, the trust management methods employed within an organization cannot intercept access actions already authenticated with valid credentials. In this paper, we propose a novel insider threat detection method named SENTINEL, which identifies abnormal insider behavior and provides fine-grained threat intelligence. We devise a dynamic user behavior interaction graph (BIG) that jointly considers the spatial distribution of user behavioral trajectories over the network topology and the temporal variations of user behavioral profiles. By incorporating a spatio-temporal graph neural network, SENTINEL learns the operational regularities of users at specific times and positions in the BIG. To perceive both abrupt and persistent threats simultaneously, we conceive a multi-timescale fusion mechanism that detects users' activities at different timescales. SENTINEL performs log-entry-level detection without requiring any attack samples during model training. Experiments conducted on widely used public datasets demonstrate that SENTINEL achieves superior performance while maintaining a relatively low computational overhead compared to state-of-the-art methods.
Title: SENTINEL: Insider Threat Detection Based on Multi-Timescale User Behavior Interaction Graph Learning
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 774-790.
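The multi-timescale idea — a short window catches abrupt bursts, a long window catches slow drift — can be reduced to a minimal sketch. Sliding-window counts with max fusion are an illustrative simplification, not SENTINEL's graph-based model:

```python
from collections import deque

class MultiTimescaleScorer:
    """Track one user's activity counts over several sliding windows and
    fuse the per-window anomaly scores, so that a short burst (abrupt
    threat) and a slow drift (persistent threat) both raise the score."""
    def __init__(self, window_sizes=(8, 64), baselines=(1.0, 4.0)):
        self.windows = [deque(maxlen=w) for w in window_sizes]
        self.baselines = baselines

    def observe(self, events_in_tick):
        for w in self.windows:
            w.append(events_in_tick)

    def score(self):
        # Per-window score: how far the window mean exceeds its baseline.
        scores = [max(0.0, sum(w) / len(w) - b) if w else 0.0
                  for w, b in zip(self.windows, self.baselines)]
        return max(scores)  # fusion: the most alarmed timescale wins
```

Max fusion means no single timescale has to detect everything; each window only needs to be sensitive to anomalies at its own horizon.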
Pub Date: 2024-12-16 | DOI: 10.1109/TNSE.2024.3517872
Di Wu;Zhuang Cao;Xudong Lin;Feng Shu;Zikai Feng
In cooperative navigation of multi-unmanned aerial vehicle (UAV) systems under the communication coverage of ground base stations, UAVs face challenges in maintaining reliable communication, ensuring flight safety, and achieving efficient collaboration. To address these challenges, this study formulates the cooperative navigation problem as a Markov game and introduces a two-stream graph multi-agent proximal policy optimization (two-stream GMAPPO) algorithm based on graph neural networks (GNNs). We model UAVs and other entities as graph-structured data, aggregate node information through the GNN module, and extract latent features related to the UAVs, allowing each UAV to obtain richer local information. Through the two-stream network structure, the extracted features are combined with the UAVs' state space to enhance adaptability to environmental changes and improve navigation safety. Simulation results demonstrate that the proposed method significantly outperforms baseline algorithms in both mean reward and convergence speed, confirming the superior performance of two-stream GMAPPO.
Title: A Learning-Based Cooperative Navigation Approach for Multi-UAV Systems Under Communication Coverage
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 763-773.
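Two-stream GMAPPO belongs to the PPO family, whose defining ingredient is the clipped surrogate objective. A scalar-form sketch of that generic PPO ingredient (not the paper's full two-stream loss) is:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss used by PPO-style algorithms.
    ratio     -- pi_new(a|s) / pi_old(a|s) for one sampled action
    advantage -- advantage estimate for that action
    Clipping the ratio to [1-eps, 1+eps] bounds how far a single update
    can push the policy; the loss is negated for gradient *descent*."""
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return -min(unclipped, clipped)
```

In a multi-agent variant, each UAV's policy is updated with this objective over its own trajectories, with the GNN stream supplying the shared graph features that enter the state.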
The development of the next-generation ubiquitous network imposes higher requirements on the connection density of communication devices, spurring extensive research on link management. However, as network scale expands, the weaknesses of existing algorithms in computational efficiency, performance, and realizability have become prominent. The emerging graph neural network (GNN) provides a new way to solve this problem. To make full use of the broadcast nature of wireless communication, we design a cross-domain distributed GNN structure, named the synchronous message passing neural network (SynMPNN), which combines measurable indices of the actual scene with a message-passing mechanism. This new GNN structure and the additional input feature dimension (i.e., SINR) together provide more comprehensive information for network training. After the initial power decision from SynMPNN is deployed, we select some links to shut down and others to reduce their transmit power, further improving system performance and saving energy. Simulation results show that our proposed method, under distributed execution, reaches 83.1% of the performance of the centralized method. In addition, the discussion on scalability suggests that, to save training cost, small-scale scenes with the same density can be used for training when the method is applied to large-scale scenes.
Title: A Scalable Distributed Link Management Method for Massive IoT With Synchronous Message Passing Neural Network
Authors: Haosong Gou; Pengfei Du; Xidian Wang; Gaoyi Zhang; Daosen Zhai
Pub Date: 2024-12-16 | DOI: 10.1109/TNSE.2024.3517662
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 750-762.
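One synchronous message-passing round of the general kind the SynMPNN abstract describes can be sketched as follows. Mean aggregation and concatenation are illustrative choices; the paper's actual update rule and its use of per-link SINR features may differ:

```python
def message_pass(adj, features):
    """One synchronous message-passing round: every node averages its
    neighbours' feature vectors (in this setting, per-link SINR would
    be one feature dimension) and concatenates the aggregate with its
    own features.  adj[i] lists the neighbour indices of node i."""
    out = []
    for i, fi in enumerate(features):
        nbrs = [features[j] for j in adj[i]]
        if nbrs:
            agg = [sum(col) / len(nbrs) for col in zip(*nbrs)]
        else:
            agg = [0.0] * len(fi)  # isolated node: zero message
        out.append(list(fi) + agg)
    return out
```

Because each node only needs its neighbours' broadcasts, this update runs distributedly, which is what makes the GNN approach attractive for link management at scale.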
Reinforcement Learning (RL) algorithms have been increasingly applied to tackle the complex challenges of offloading in vehicular ad hoc networks (VANETs), particularly in high-density and high-mobility scenarios where network congestion leads to significant latency issues. These challenges are further exacerbated by the introduction of low-latency applications, such as high-definition (HD) Maps, which are penalized in the current IEEE 802.11p standard by their low-priority classification. In our previous work, we developed a novel coverage-aware Q-learning algorithm using a single-agent approach to address these concerns. However, a key question remains: how does this solution perform when scaled to a larger, more complex environment using a multi-agent system? To address this, the current study evaluates the scalability and effectiveness of the previously developed single-agent Q-learning solution within a distributed multi-agent environment. This multi-agent approach is designed to enhance network performance by leveraging a smaller state and action space across multiple agents. We conduct extensive evaluations through various test cases, considering factors such as reward functions for individual and overall network performance, the number of agents, and comparisons between centralized and distributed learning. The experimental results show that our proposed multi-agent solution significantly reduces latency for voice, video, HD Map, and best-effort traffic by 40.4%, 36%, 43%, and 12%, respectively, compared to the single-agent approach. These findings demonstrate the potential of our solution to effectively manage the challenges of VANETs in dynamic and large-scale environments.
Title: Multi-Agent Assessment With QoS Enhancement for HD Map Updates in a Vehicular Network and Multi-Service Environment
Authors: Jeffrey Redondo; Nauman Aslam; Juan Zhang; Zhenhui Yuan
Pub Date: 2024-12-11 | DOI: 10.1109/TNSE.2024.3514744
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 738-749.
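The building block being scaled up here is tabular Q-learning, one instance of which runs per agent in the multi-agent setup. A minimal generic sketch (not the paper's coverage-aware variant; all hyperparameters are illustrative):

```python
import random
from collections import defaultdict

class QLearningAgent:
    """One tabular Q-learning agent with epsilon-greedy exploration.
    A multi-agent deployment runs one such agent per vehicle over its
    own, smaller state/action space."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, rng=None):
        self.q = defaultdict(float)          # (state, action) -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = rng or random.Random(0)

    def act(self, state):
        if self.rng.random() < self.epsilon:  # explore
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard temporal-difference update toward r + gamma * max_a' Q.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

Splitting one large table into many per-agent tables is what gives the multi-agent version its smaller state and action space, at the cost of each agent observing only its own slice of the network.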
Pub Date: 2024-12-09 | DOI: 10.1109/TNSE.2024.3514171
Jackson Cates;Randy C. Hoover;Kyle Caudle;David J. Marchette
Recently, there has been a growing demand for advances in representation learning for graphs. The literature has developed methods to represent nodes in an embedding space, allowing classical techniques to perform node classification and prediction. One such method is the graph convolutional neural network, which aggregates the features of a node's neighbors to create the embedding. In this method, the embedding contains local information about an individual's connections but lacks the global community dynamics surrounding that individual. We propose a method that leverages both local and global information, offering significant advancements in the analysis of social networks. We first represent information across the entire hierarchy of the network by allowing the graph convolutional network to skip neighbors in its convolutions. We propose three skipping methods that leverage matrix powers of the adjacency matrix and a breadth-first search traversal. Once convolutions are performed, we capture correlations across the hierarchies by stacking our convolutions into a tensor (i.e., a multi-way array), enabling a more holistic understanding of individual nodes' roles within their communities. We present experimental results for the proposed method and compare and contrast it with other state-of-the-art methods on benchmark social network datasets for node classification and link prediction tasks. Ultimately, the proposed method not only advances the field of graph representation learning but also demonstrates improved performance across various complex social networks.
Title: TSGCN: A Framework for Hierarchical Graph Representation Learning
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 727-737.
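The "skip neighbors via matrix powers" idea can be illustrated with a dense-matrix sketch: the k-th power of the adjacency matrix counts k-step walks, so multiplying it with the feature matrix aggregates features from k hops away. Pure-Python matrices are used for clarity; a real implementation would use sparse operations:

```python
def mat_mul(a, b):
    """Naive dense matrix product (lists of lists)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def k_hop_aggregate(adj, features, k):
    """Aggregate node features along k-step walks via A^k @ X.
    Stacking the results for several values of k yields the kind of
    hierarchy tensor described in the abstract."""
    power = adj
    for _ in range(k - 1):
        power = mat_mul(power, adj)
    return mat_mul(power, features)
```

On a path graph 0-1-2, a feature placed on node 0 reaches nodes 0 and 2 at k = 2, skipping the immediate neighbor exactly as the hierarchy construction requires.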
Uncertainties in distributed renewable energy and load demand induced by meteorological factors pose a significant challenge to the voltage quality of the distribution network. This paper addresses this issue from a cyber-physical perspective by proposing a novel voltage regulation (VR) service pricing mechanism for the distribution network. Specifically, an improved nonparametric kernel density estimation method with adaptive variable bandwidth is proposed to measure the coupling among different meteorological factors. The intervals of photovoltaic power and air-conditioning load are defined by a novel high-performance affine algorithm, integrated with mandatory boundary and space approximation techniques. The distribution-level VR market is based on an affiliated layered communication architecture characterized by cloud-edge-terminal collaboration and the message queue telemetry transport (MQTT) protocol. A grid-aware voltage regulation interval optimization model is proposed to determine the VR service price through an electrical-distance-based rule. Case studies show that the proposed price can significantly facilitate robust VR decisions, promote the voltage-friendly behavior of VR service providers, and reduce the VR cost by about 20.96% compared to currently adopted pricing mechanisms.
Title: Voltage Regulation Service Pricing in Cyber-Physical Distribution Networks With Multi-Dimensional Meteorological Uncertainties
Authors: Zhaobin Wei; Zhenyu Huang; Zhiyuan Tang; Huiming Chen; Xianwang Zuo; Haotang Li; Haoqiang Liu; Jichun Liu; Alberto Borghetti
Pub Date: 2024-12-09 | DOI: 10.1109/TNSE.2024.3512580
IEEE Transactions on Network Science and Engineering, vol. 12, no. 2, pp. 710-726.
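Adaptive variable-bandwidth kernel density estimation, the general technique the paper improves, can be sketched in one dimension: each sample gets its own bandwidth from the distance to its k-th nearest neighbor, so sparse regions get wide kernels and dense regions narrow ones. The k-nearest-neighbor bandwidth rule and all constants are illustrative, not the paper's estimator:

```python
import math

def adaptive_kde(samples, x, k=5):
    """Gaussian KDE with a per-sample bandwidth set from the distance
    to the sample's k-th nearest neighbour (variable-bandwidth KDE)."""
    density = 0.0
    for s in samples:
        dists = sorted(abs(s - t) for t in samples if t != s)
        h = max(dists[min(k, len(dists)) - 1], 1e-6)  # local bandwidth
        density += (math.exp(-0.5 * ((x - s) / h) ** 2)
                    / (h * math.sqrt(2.0 * math.pi)))
    return density / len(samples)
```

A fixed-bandwidth KDE either over-smooths dense clusters or leaves spurious bumps in the tails; the per-sample bandwidth avoids both, which matters when the estimated densities feed interval models of PV power and load.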
Pub Date: 2024-12-09 | DOI: 10.1109/TNSE.2024.3513456
Xiaoyu Zhang;Wenchuan Yang;Jiawei Feng;Bitao Dai;Tianci Bu;Xin Lu
Identifying common structures forms the basis of networked systems design and optimization. However, real structures represented by graphs often vary in size, leading to low accuracy for traditional graph classification methods; such graphs are called cross-scale graphs. To overcome this limitation, we propose GSpect, an advanced spectral graph filtering model for cross-scale graph classification tasks. Unlike other methods, we use graph wavelet neural networks for the model's convolution layer, which aggregates multi-scale messages to generate graph representations. We design a spectral-pooling layer that aggregates the nodes into a single node, reducing cross-scale graphs to the same size. We also collect and construct a cross-scale benchmark data set, MSG (Multi-Scale Graphs). Experiments reveal that, on open data sets, GSpect improves classification accuracy by 1.62% on average, and by a maximum of 3.33% on PROTEINS. On MSG, GSpect improves classification accuracy by 13.38% on average. GSpect fills the gap in cross-scale graph classification studies and has the potential to assist application research, such as diagnosing brain disease by predicting a brain network's label and developing new drugs with molecular structures learned from their counterparts in other systems.
{"title":"GSpect: Spectral Filtering for Cross-Scale Graph Classification","authors":"Xiaoyu Zhang;Wenchuan Yang;Jiawei Feng;Bitao Dai;Tianci Bu;Xin Lu","doi":"10.1109/TNSE.2024.3513456","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3513456","url":null,"abstract":"Identifying structures in common forms the basis for networked systems design and optimization. However, real structures represented by graphs are often of varying sizes, leading to the low accuracy of traditional graph classification methods. These graphs are called cross-scale graphs. To overcome this limitation, in this study, we propose GSpect, an advanced spectral graph filtering model for cross-scale graph classification tasks. Compared with other methods, we use graph wavelet neural networks for the convolution layer of the model, which aggregates multi-scale messages to generate graph representations. We design a spectral-pooling layer which aggregates nodes to one node to reduce the cross-scale graphs to the same size. We collect and construct the cross-scale benchmark data set, MSG (Multi Scale Graphs). Experiments reveal that, on open data sets, GSpect improves the performance of classification accuracy by 1.62% on average, and for a maximum of 3.33% on PROTEINS. On MSG, GSpect improves the performance of classification accuracy by 13.38% on average. 
GSpect fills the gap in cross-scale graph classification studies and has the potential to assist applied research, such as diagnosing brain diseases by predicting brain-network labels and developing new drugs from molecular structures learned from their counterparts in other systems.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 1","pages":"547-558"},"PeriodicalIF":6.7,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
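Editor's note: the core idea behind the spectral-pooling layer described above — mapping graphs of different sizes to a fixed-length representation via the graph spectrum — can be illustrated with a minimal sketch. This is not GSpect's learned layer (which uses graph wavelet neural networks); it is a simplified, non-learned stand-in that projects node features onto the k lowest-frequency eigenvectors of the normalized Laplacian, so graphs of any size yield an embedding of the same length. The function names and the choice k=2 are illustrative assumptions.

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D @ A @ D

def spectral_pool(A, X, k=2):
    """Project node features X onto the k lowest-frequency Laplacian
    eigenvectors, giving a fixed-length (k * F,) graph embedding
    regardless of the number of nodes (requires at least k nodes)."""
    L = normalized_laplacian(A)
    _, U = np.linalg.eigh(L)        # eigenvalues ascending, so
    Uk = U[:, :k]                   # columns 0..k-1 are the smoothest modes
    coeffs = Uk.T @ X               # (k, F) spectral coefficients
    return coeffs.reshape(-1)       # flatten to a fixed-length vector

# Graphs of different sizes map to embeddings of the same length.
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path
X1 = np.random.rand(3, 2)
A2 = np.ones((6, 6)) - np.eye(6)                               # 6-node clique
X2 = np.random.rand(6, 2)
assert spectral_pool(A1, X1).shape == spectral_pool(A2, X2).shape == (4,)
```

The fixed output length is what makes cross-scale classification possible downstream: any standard classifier can consume these embeddings without per-graph padding.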
Pub Date : 2024-12-03DOI: 10.1109/TNSE.2024.3505986
Kai Peng;Tongxin Liao;Xi Liao;Jiangshan Xie;Bo Xu;Tianping Deng;Menglan Hu
As blockchain technology becomes widely adopted in Mobile Internet of Things (MIoT) networks, the growing volume of blockchain data significantly increases storage pressure on peer nodes. Collaborative storage, which distributes blockchain data across nodes in a cluster, offers a promising solution. However, the frequent movement of mobile nodes disrupts cluster structures, and existing static solutions fail to address this dynamic nature, rendering them ineffective. To address this issue, we propose a Dynamic Cluster-based Mobile Node Migration Scheme (DCMM), comprising two key components: new cluster selection and block redistribution. The Dynamic Node Synchronization Algorithm (DNSA) optimizes cluster selection, and the Dynamic Block Allocation Algorithm (DBAA) manages efficient block redistribution. Comparative analysis with five baseline approaches shows that DCMM improves performance by over 16.69% in the weighted optimization objective, which considers access costs, migration costs, and dwell times. These results demonstrate that our approach significantly reduces network costs compared to the baseline algorithms.
{"title":"DCMM: Dynamic Cluster-Based Mobile Node Migration Scheme for Blockchain Collaborative Storage in Mobile IoT Networks","authors":"Kai Peng;Tongxin Liao;Xi Liao;Jiangshan Xie;Bo Xu;Tianping Deng;Menglan Hu","doi":"10.1109/TNSE.2024.3505986","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3505986","url":null,"abstract":"As blockchain technology becomes widely adopted in Mobile Internet of Things (MIoT) networks, the growing volume of blockchain data significantly increases storage pressure on peer nodes. Collaborative storage, which distributes blockchain data across nodes in a cluster, offers a promising solution. However, the frequent movement of mobile nodes disrupts cluster structures, and existing static solutions fail to address this dynamic nature, rendering them ineffective. To address this issue, we propose a Dynamic Cluster-based Mobile Node Migration Scheme (DCMM), comprising two key components: new cluster selection and block redistribution. The Dynamic Node Synchronization Algorithm (DNSA) optimizes cluster selection, and the Dynamic Block Allocation Algorithm (DBAA) manages efficient block redistribution. Comparative analysis with five baseline approaches shows that DCMM improves performance by over 16.69% in the weighted optimization objective, which considers access costs, migration costs, and dwell times. 
These results demonstrate that our approach significantly reduces network costs compared to the baseline algorithms.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"584-598"},"PeriodicalIF":6.7,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
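Editor's note: the weighted optimization objective mentioned in the DCMM abstract — trading off access costs, migration costs, and dwell times when choosing a new cluster — can be sketched in a few lines. The weights, field names, and the linear scoring form below are illustrative assumptions; the paper's actual DNSA/DBAA algorithms are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    access_cost: float     # hypothetical cost for nodes to access blocks here
    migration_cost: float  # hypothetical cost to move blocks to this cluster
    dwell_time: float      # expected time the mobile node stays in this cluster

def cluster_score(c, w_access=0.5, w_migrate=0.3, w_dwell=0.2):
    """Lower is better: costs are penalized, longer dwell time is rewarded."""
    return w_access * c.access_cost + w_migrate * c.migration_cost - w_dwell * c.dwell_time

def select_cluster(candidates):
    """Pick the candidate cluster minimizing the weighted objective."""
    return min(candidates, key=cluster_score)

clusters = [
    Cluster("A", access_cost=10, migration_cost=5, dwell_time=8),
    Cluster("B", access_cost=6,  migration_cost=9, dwell_time=4),
    Cluster("C", access_cost=7,  migration_cost=4, dwell_time=6),
]
best = select_cluster(clusters)  # C scores 0.5*7 + 0.3*4 - 0.2*6 = 3.5, the minimum
```

A longer expected dwell time lowers the score because migrating to a cluster the node will soon leave wastes the migration cost, which is exactly the dynamic effect the abstract says static schemes ignore.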