In the rapidly advancing field of federated learning (FL), ensuring efficient FL task delegation while incentivizing FL client participation poses significant challenges, especially in wireless networks where FL participants' coverage is limited. Existing Contract Theory-based methods are designed under the assumption that there is only one FL server in the system (i.e., the monopoly market assumption), which is unrealistic in practice. To address this limitation, we propose the Fairness-Aware Multi-Server FL task delegation approach (FAMuS), a novel framework based on Contract Theory and Lyapunov optimization to jointly address these intricate issues facing wireless multi-server FL networks (WMSFLN). Within a given WMSFLN, a task requester produces multiple FL tasks and delegates them to FL servers, which coordinate the training processes. To ensure fair treatment of FL servers, FAMuS establishes virtual queues to track their previous access to FL tasks, updating them in relation to the resulting FL model performance. The objective is to minimize the time-averaged cost in a WMSFLN while ensuring all queues remain stable. This is particularly challenging given the incomplete information regarding FL clients' participation costs and the unpredictable nature of the WMSFLN state, which depends on the locations of the mobile clients. Extensive experiments comparing FAMuS against five state-of-the-art approaches on two real-world datasets demonstrate that it achieves 6.91% higher test accuracy, 27.34% lower cost, and 0.63% higher fairness on average than the best-performing baseline.
{"title":"Fairness-Aware Multi-Server Federated Learning Task Delegation Over Wireless Networks","authors":"Yulan Gao;Chao Ren;Han Yu;Ming Xiao;Mikael Skoglund","doi":"10.1109/TNSE.2024.3508594","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3508594","url":null,"abstract":"In the rapidly advancing field of federated learning (FL), ensuring efficient FL task delegation while incentivizing FL client participation poses significant challenges, especially in wireless networks where FL participants' coverage is limited. Existing Contract Theory-based methods are designed under the assumption that there is only one FL server in the system (i.e., the monopoly market assumption), which in unrealistic in practice. To address this limitation, we propose <underline>F</u>airness-<underline>A</u>ware <underline>Mu</u>lti-<underline>S</u>erver FL task delegation approach (<monospace>FAMuS</monospace>), a novel framework based on Contract Theory and Lyapunov optimization to jointly address these intricate issues facing wireless multi-server FL networks (WMSFLN). Within a given WMSFLN, a task requester products multiple FL tasks and delegate them to FL servers which coordinate the training processes. To ensure fair treatment of FL servers, <monospace>FAMuS</monospace> establishes virtual queues to track their previous access to FL tasks, updating them in relation to the resulting FL model performance. The objective is to minimize the time-averaged cost in a WMSFLN, while ensuring all queues remain stable. This is particularly challenging given the incomplete information regarding FL clients' participation cost and the unpredictable nature of the WMSFLN state, which depends on the locations of the mobile clients. Extensive experiments comparing <monospace>FAMuS</monospace> against five state-of-the-art approaches based on two real-world datasets demonstrate that it achieves 6.91% higher test accuracy, 27.34% lower cost, and 0.63% higher fairness on average than the best-performing baseline.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"684-697"},"PeriodicalIF":6.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The secure sharing of data is crucial for peer-to-peer energy trading. However, the vulnerability of Information and Communication Technology (ICT) infrastructures to cyberattacks, e.g., Denial of Service (DoS) attacks, poses a significant challenge. A possible solution is to use Digital Twin (DT) modeling of the physical system, which provides robust digital mapping and Big Data processing capabilities that facilitate data recovery. To this end, this paper proposes a DT-enabled energy trading framework for cyber-physical energy systems that offers data analytics and recovery capabilities to defend against DoS attacks. Within this framework, a new distributed approximate-Newton trading algorithm with a switched triggering control strategy is proposed. Therein, the DT model is employed to recover data and adjust the evolution of the trading trajectory during attack periods. This enables the proposed method to find optimal trading solutions even in the presence of DoS attacks. Theoretical analysis demonstrates the correctness of the proposed method, and numerical simulations are conducted to assess its effectiveness.
{"title":"Digital Twin for Secure Peer-to-Peer Trading in Cyber-Physical Energy Systems","authors":"Yushuai Li;Peiyuan Guan;Tianyi Li;Kim Guldstrand Larsen;Marco Aiello;Torben Bach Pedersen;Tingwen Huang;Yan Zhang","doi":"10.1109/TNSE.2024.3507956","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3507956","url":null,"abstract":"The secure sharing of data is crucial for peer-to-peer energy trading. However, the vulnerability of Information and Communication Technology (ICT) infrastructures to cyberattacks, e.g., Denial of Service (DoS) attacks, poses a significant challenge. A possible solution is to use Digital Twin (DT) modeling of the physical system, which provides robust digital mapping and Big Data processing capabilities that facilitate data recovery. To this end, this paper proposes a DT-enabled energy trading framework for cyber-physical energy systems that offers data analytic and recovery capabilities to defend from DoS attacks. With this framework, a new distributed approximate-newton trading algorithm with a switched triggering control strategy is proposed. Therein, the DT model is employed to achieve data recovery and adjust the system evolution of trading trajectory during attack periods. This enables the proposed method to find optimal trading solutions even in the presence of DoS attacks. Theoretical analysis results demonstrate the correctness of the proposed method. Furthermore, numerical simulations are conducted to assess the effectiveness of the proposed method.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"669-683"},"PeriodicalIF":6.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-28. DOI: 10.1109/TNSE.2024.3508847
Zhixin Liu;Jiawei Su;Jianshuai Wei;Wenxuan Chen;Kit Yan Chan;Yazhou Yuan;Xinping Guan
Leveraging the abundance of computational resources, the cloud-edge collaborative architecture provides stronger data processing capabilities for vehicular networks, which not only significantly enhances the timeliness of offloading for delay-sensitive tasks but also substantially reduces the resource expenditure associated with non-delay-sensitive tasks. Addressing communication scenarios characterized by diverse task types, this paper introduces cloud-assisted mobile-edge computing (C-MEC) networks together with a novel optimization scheme. The scheme incorporates a utility function correlated with offloading delays during the transmission and computation phases, effectively balancing resource allocation and enhancing the operational efficiency of vehicular networks. However, the mobility of vehicles introduces channel uncertainty, adversely affecting the offloading stability of C-MEC networks. To develop a practical channel model, a first-order Markov process is employed, taking vehicular mobility into account. Additionally, probability constraints on co-channel interference are imposed on signal links to ensure offloading quality. The Bernstein approximation is utilized to transform the original interference constraints into a tractable form, and the Successive Convex Approximation (SCA) technique is applied to address the non-convex robust optimization problem. Furthermore, this paper proposes a robust iterative algorithm to determine optimal power control and task scheduling strategies. Numerical simulations are conducted to assess the effectiveness of the proposed algorithm against benchmark methods, with a particular focus on robustness in task offloading and utility in resource allocation.
{"title":"Joint Robust Power Control and Task Scheduling for Vehicular Offloading in Cloud-Assisted MEC Networks","authors":"Zhixin Liu;Jiawei Su;Jianshuai Wei;Wenxuan Chen;Kit Yan Chan;Yazhou Yuan;Xinping Guan","doi":"10.1109/TNSE.2024.3508847","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3508847","url":null,"abstract":"Leveraging the abundance of computational resources, the cloud-edge collaborative architecture provide stronger data processing capabilities for vehicular networks, which not only significantly enhances the timeliness of offloading operations for delay-sensitive tasks but also substantially mitigates resource expenditure associated with non-delay-sensitive tasks. Addressing the communication scenarios characterized by diverse task types, this paper introduces cloud-assisted mobile-edge computing (C-MEC) networks, underscored by a novel optimization scheme. The scheme incorporates a utility function that is correlated with offloading delays during the transmission and computation phases, effectively balancing resource allocations and enhancing the operational efficiency of vehicular networks. However, the mobility of vehicles introduces channel uncertainty, adversely affecting the offloading stability of C-MEC networks. To develop a practical channel model, a first-order Markov process is employed, taking into account vehicular mobility. Additionally, probability constraints regarding co-channel interference are imposed on signal links to ensure the offloading quality. The Bernstein approximation method is utilized to transform the original interference constraints into a tractable form, and the Successive Convex Approximation (SCA) technique is meticulously applied to address the non-convex robust optimization problem. Furthermore, this paper proposes a robust iterative algorithm to ascertain optimal power control and task scheduling strategies. Numerical simulations are conducted to assess the effective of the proposed algorithm against benchmark methods, with a particular focus on robustness in task offloading and utility in resource allocation.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"698-709"},"PeriodicalIF":6.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-28. DOI: 10.1109/TNSE.2024.3498942
Qihui Zhu;Shenwen Chen;Jingbin Zhang;Gang Yan;Wenbo Du
Controlling the dynamics of complex networks with only a few driver nodes is a significant objective for system control. However, the energy required for control becomes prohibitively large when the fraction of driver nodes is small. Previous methods to reduce control energy have mainly focused on increasing the number or altering the placement of driver nodes. In this paper, a novel approach is proposed to reduce control energy by rewiring networks while keeping the number of driver nodes unchanged. We model network rewiring as an optimization problem and develop a memetic algorithm to solve it accurately and efficiently. Specifically, we introduce a connectivity-preserving crossover operator to avoid searching invalid regions of the solution space and design a local search operator that exploits network heterogeneity to accelerate the convergence of the algorithm. Experimental results on both synthetic and real networks demonstrate the effectiveness of the proposed approach. Notably, our findings reveal that networks with low control energy tend to exhibit a "core-chain" structure, where control nodes and high-weight edges form a densely connected core, while other nodes and edges form independent chains connected to the core's boundaries. We further analyze the statistical description and formation mechanism of this structure.
{"title":"Network Topology Optimization for Energy-Efficient Control","authors":"Qihui Zhu;Shenwen Chen;Jingbin Zhang;Gang Yan;Wenbo Du","doi":"10.1109/TNSE.2024.3498942","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3498942","url":null,"abstract":"Controlling the dynamics of complex networks with only a few driver nodes is a significant objective for system control. However, the energy required for control becomes prohibitively large when the fraction of driver nodes is small. Previous methods to reduce control energy have mainly focused on increasing the number or altering the placement of driver nodes. In this paper, a novel approach is proposed to reduce control energy by rewiring networks while keeping the number of driver nodes unchanged. We model network rewiring to an optimization problem and develop a memetic algorithm to solve it accurately and efficiently. Specifically, we introduce a connectivity-preserving crossover operator to avoid searching in invalid solution space and design a local search operator to accelerate the convergence of the algorithm according to the network heterogeneity. Experimental results on both synthetic networks and real networks demonstrate the effectiveness of the proposed approach. Notably, our findings reveal that networks with low control energy tend to exhibit a âcore-chainâ structure, where control nodes and high-weight edges form a densely connected core, while other nodes and edges form independent chains connected to the core's boundaries. We further analyze the statistical description and formation mechanism of this structure.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 1","pages":"423-432"},"PeriodicalIF":6.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-27. DOI: 10.1109/TNSE.2024.3507545
Jelena Mišić;Vojislav B. Mišić;Xiaolin Chang
Both Proof of Stake (PoS) and Delegated Proof of Stake (DPoS) consensus schemes for permissioned blockchains incur the risk of centralizing voting power in the hands of a small number of wealthy voters. In this work, we present the Qualified Proof of Stake (QPoS) scheme, which alleviates centralization by rewarding truthful behavior of both voters and leaders and penalizing their untruthful behavior. Leaders are elected according to the current stake, which gives preference to more trustworthy nodes. Nodes whose stake is low at the end of a round, which consists of multiple PBFT voting cycles, are excluded from voting in subsequent rounds, while nodes with sufficient stake may leave the network temporarily without losing their stake. We consider multiple node classes with different voting behavior and model them using an embedded Markov chain corresponding to a semi-Markov process (SMP) in order to determine system performance. Our results show the interaction of class populations, voting behavior, and mobility with round size, and demonstrate notable stake-based prioritization among the nodes for the selection of PBFT leaders. Moreover, we show that a higher proportion of well-behaved nodes and shorter voting rounds are needed to achieve consensus with high probability.
{"title":"QPoS: Decentralized Stake-Based Leader and Voter Selection in a PBFT System With Mobile Voters","authors":"Jelena Mišić;Vojislav B. Mišić;Xiaolin Chang","doi":"10.1109/TNSE.2024.3507545","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3507545","url":null,"abstract":"Both Proof of Stake (PoS) and Delegated Proof of Stake (DPoS) consensus schemes for permissioned blockchains incur the risk of centralization of voting power in the hands of a small number of wealthy voters. In this work, we present Qualified Proof of Stake (QPoS) scheme which alleviates centralization by rewarding truthful behavior of both voters and leaders, and penalizing their untruthful behavior. Leaders are elected according to the current stake which gives preference to more trustworthy nodes. Nodes with low stake at the end of a round which consists of multiple PBFT voting cycles are excluded from voting in subsequent rounds, while nodes with sufficient stake may leave the network temporarily without losing their stake. We consider multiple node classes with different voting behavior and model them using embedded Markov Chain which corresponds to Semi Markov Process (SMP) in order to determine system performance. Our results show the interaction of class populations, voting behavior, and mobility with round size, and show notable stake-based prioritization among the nodes for selection of PBFT leaders. Moreover, we show that higher proportion of well behaved nodes and shorter voting rounds are needed to achieve consensus with high probability.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"653-668"},"PeriodicalIF":6.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143465861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comprehensive network monitoring data is crucial for anomaly detection and network optimization tasks. However, due to factors such as sampling strategies and failures in data transmission or storage, only incomplete monitoring data can be obtained. Traditional techniques for completing network monitoring data matrices have limitations in leveraging network-related features and lack the adaptability required for offline and online execution. In this paper, we introduce a novel approach that significantly improves the integration of network features and operational flexibility in data completion tasks. By converting the data matrix into a bipartite graph and integrating network features into the graph's node attributes, we redefine the problem of missing data completion. This transformation reframes the issue as estimating unobserved edges in the bipartite graph. We propose the Bi-directional Bipartite Graph Completion (BGC) model, a flexible framework that seamlessly adapts to both offline and online data completion tasks. This model encapsulates static, dynamic, bi-directional temporal features and network topology, thereby improving the accuracy of unobserved edge estimation. Experiments conducted on two public data traces demonstrate the superiority of our method over six baseline models. Our method not only achieves higher accuracy in offline scenarios but also displays remarkable speed in online situations.
{"title":"Network Monitoring Data Recovery Based on Flexible Bi-Directional Model","authors":"Qixue Lin;Xiaocan Li;Kun Xie;Jigang Wen;Shiming He;Gaogang Xie;Xiaopeng Fan;Quan Feng","doi":"10.1109/TNSE.2024.3507078","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3507078","url":null,"abstract":"Comprehensive network monitoring data is crucial for anomaly detection and network optimization tasks. However, due to factors such as sampling strategies and failures in data transmission or storage, only incomplete monitoring data can be obtained. Traditional techniques for completing network monitoring data matrices have limitations in leveraging network-related features and lack the adaptability required for offline and online execution. In this paper, we introduce a novel approach that significantly improves the integration of network features and operational flexibility in data completion tasks. By converting the data matrix into a bipartite graph and integrating network features into the graph's node attributes, we redefine the problem of missing data completion. This transformation reframes the issue as estimating unobserved edges in the bipartite graph. We propose the Bi-directional Bipartite Graph Completion (BGC) model, a flexible framework that seamlessly adapts to both offline and online data completion tasks. This model encapsulates static, dynamic, bi-directional temporal features and network topology, thereby improving the accuracy of unobserved edge estimation. Experiments conducted on two public data traces demonstrate the superiority of our method over six baseline models. Our method not only achieves higher accuracy in offline scenarios but also displays remarkable speed in online situations.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"623-635"},"PeriodicalIF":6.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-27. DOI: 10.1109/TNSE.2024.3507273
Yu Qiao;Chaoning Zhang;Apurba Adhikary;Choong Seon Hong
Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training in edge networks. However, challenges such as vulnerability to adversarial examples and non-independent and identically distributed (non-IID) data across devices hinder the deployment of adversarially robust and accurate models at the edge. While adversarial training (AT) is widely recognized as an effective defense strategy against adversarial attacks in centralized training, we shed light on the adverse effects of directly applying AT in FL, which can severely compromise accuracy under non-IID scenarios. To address this limitation, this paper proposes FatCC, which incorporates local logit Calibration and global feature Contrast into the vanilla federated adversarial training (Fat) process from both logit and feature perspectives. This approach effectively enhances the robust accuracy (RA) and clean accuracy (CA) of the federated system. First, we introduce logit calibration, where the logits are calibrated during local adversarial updates, thereby improving adversarial robustness. Second, FatCC incorporates feature contrast, which involves a global alignment term that aligns each local representation with corresponding unbiased global features, thus enhancing robustness and accuracy. Extensive experiments across multiple datasets demonstrate that FatCC achieves comparable or superior performance gains in both CA and RA compared to other baselines.
{"title":"Logit Calibration and Feature Contrast for Robust Federated Learning on Non-IID Data","authors":"Yu Qiao;Chaoning Zhang;Apurba Adhikary;Choong Seon Hong","doi":"10.1109/TNSE.2024.3507273","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3507273","url":null,"abstract":"Federated learning (FL) is a privacy-preserving distributed framework for collaborative model training in edge networks. However, challenges such as vulnerability to adversarial examples and non-independent and identically distributed (non-IID) data across devices hinder the deployment of adversarially robust and accurate models at the edge. While adversarial training (AT) is widely recognized as an effective defense strategy against adversarial attacks in centralized training, we shed light on the adverse effects of directly applying AT in FL, which can severely compromise accuracy under non-IID scenarios. To address this limitation, this paper proposes <underline>FatCC</u>, which incorporates local logit <underline>C</u>alibration and global feature <underline>C</u>ontrast into the vanilla federated adversarial training (<underline>Fat</u>) process from both logit and feature perspectives. This approach effectively enhances the robust accuracy (RA) and clean accuracy (CA) of the federated system. First, we introduce logit calibration, where the logits are calibrated during local adversarial updates, thereby improving adversarial robustness. Second, FatCC incorporates feature contrast, which involves a global alignment term that aligns each local representation with corresponding unbiased global features, thus enhancing robustness and accuracy. Extensive experiments across multiple datasets demonstrate that FatCC achieves comparable or superior performance gains in both CA and RA compared to other baselines.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"636-652"},"PeriodicalIF":6.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-27. DOI: 10.1109/TNSE.2024.3506732
Yanan Zhu;Qinghai Li;Tao Li;Guanghui Wen
This paper explores a class of distributed constrained convex optimization problems where the objective function is a sum of $N$ convex local objective functions. These functions are characterized by local non-smoothness yet adhere to Lipschitz continuity, and the optimization process is further constrained by $N$ distinct closed convex sets. To delineate the structure of information exchange among agents, a series of time-varying weight-unbalanced directed graphs is introduced. Furthermore, this study introduces a novel algorithm, the distributed randomized gradient-free constrained optimization algorithm. This algorithm marks a significant advancement by substituting the conventional requirement for precise gradient or subgradient information in each iterative update with a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters inherent in the local objective functions, the Lipschitz constants, and a series of standard assumptions. Significantly, the proposed algorithm converges to an approximate optimal solution within a predetermined error threshold for the considered optimization problem, achieving the same convergence rate of $\mathcal{O}(\frac{\ln(k)}{\sqrt{k}})$ as general randomized gradient-free algorithms when the decaying step size is selected appropriately. When at least one of the local objective functions exhibits strong convexity, the proposed algorithm achieves a faster convergence rate of $\mathcal{O}(\frac{1}{k})$. Finally, rigorous simulation results verify the correctness of the theoretical findings.
{"title":"Distributed Randomized Gradient-Free Convex Optimization With Set Constraints Over Time-Varying Weight-Unbalanced Digraphs","authors":"Yanan Zhu;Qinghai Li;Tao Li;Guanghui Wen","doi":"10.1109/TNSE.2024.3506732","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3506732","url":null,"abstract":"This paper explores a class of distributed constrained convex optimization problems where the objective function is a sum of <inline-formula><tex-math>$N$</tex-math></inline-formula> convex local objective functions. These functions are characterized by local non-smoothness yet adhere to Lipschitz continuity, and the optimization process is further constrained by <inline-formula><tex-math>$N$</tex-math></inline-formula> distinct closed convex sets. To delineate the structure of information exchange among agents, a series of time-varying weight-unbalance directed graphs are introduced. Furthermore, this study introduces a novel algorithm, distributed randomized gradient-free constrained optimization algorithm. This algorithm marks a significant advancement by substituting the conventional requirement for precise gradient or subgradient information in each iterative update with a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters inherent in the local objective functions, the Lipschitz constants, and a series of standard assumptions. Significantly, the proposed algorithm can converge to an approximate optimal solution within a predetermined error threshold for the consisdered optimization problem, achieving the same convergence rate of <inline-formula><tex-math>${mathcal O}(frac{ln (k)}{sqrt{k} })$</tex-math></inline-formula> as the general randomized gradient-free algorithms when the decay step size is selected appropriately. And when at least one of the local objective functions exhibits strong convexity, the proposed algorithm can achieve a faster convergence rate, <inline-formula><tex-math>${mathcal O}(frac{1}{k})$</tex-math></inline-formula>. Finally, rigorous simulation results verify the correctness of theoretical findings.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"610-622"},"PeriodicalIF":6.7,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we investigate the distributed robust state estimation of non-Gaussian systems under unknown deception attacks with imprecise constraint information. Leveraging the advantage of the multi-kernel maximum correntropy criterion (MK-MCC) in non-Gaussian signal processing, a novel maximum-a-posteriori-like utility function (MAP-LUF) is designed, inspired by the traditional 2-norm cost function, in which the inaccurate constraint information is taken into consideration. The direct solution of MAP-LUF gives rise to the centralized MK-MCC based state-constrained Kalman filter (C-MKMCSCKF) through fixed-point iteration. Subsequently, the corresponding distributed algorithm is obtained by incorporating consensus averaging in the computation of the sum terms appearing in the C-MKMCSCKF algorithm, which enables local information sharing to approximate the centralized estimation accuracy. Furthermore, we establish the connection between the proposed centralized algorithm and the Banach theorem through dimension extension, and provide the convergence condition. The effectiveness of our proposed algorithms is validated through comparisons with related works in typical target tracking scenarios over a sensor network.
{"title":"Distributed Multi-Kernel Maximum Correntropy State-Constrained Kalman Filter Under Deception Attacks","authors":"Guoqing Wang;Zhaolei Zhu;Chunyu Yang;Lei Ma;Wei Dai;Xinkai Chen","doi":"10.1109/TNSE.2024.3506553","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3506553","url":null,"abstract":"In this paper, we investigate the distributed robust state estimation of non-Gaussian systems under unknown deception attacks with the imprecise constraint information. Leveraging the advantage of multi-kernel maximum correntropy criterion (MK-MCC) in non-Gaussian signal processing, a novel maximum-a-posterior like utility function (MAP-LUF) is designed inspired by the traditional 2-norm form cost function, where the inaccurate constraint information is taken into consideration. The direct solution of MAP-LUF gives rise to the centralized MK-MCC based state-constrained Kalman filter (C-MKMCSCKF) through fixed point iteration. Subsequently, the corresponding distributed algorithm is obtained by incorporating the consensus average in the computation of sum terms existing in the C-MKMCSCKF algorithm, which enables local information sharing to approximate the centralized estimation accuracy. Furthermore, we also establish the connection between the proposed centralized algorithm and the Banach theorem through dimension extension, and provide the convergence condition. The effectiveness of our proposed algorithms is validated through comparisons with related works in typical target tracking scenarios over sensor network.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 1","pages":"533-546"},"PeriodicalIF":6.7,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142880332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven methods based on the graph convolution architecture provide a promising direction for accelerating power flow (PF) calculation. These methods directly predict the operational states of power systems from given conditions, such as loads, bus states, and topology. However, we find that the neighborhood aggregation of the graph convolution architecture violates the operational constraints of power systems. In this paper, a global-receptive graph iteration architecture that overcomes this shortcoming is designed to replace the graph convolution architecture. Specifically, Newton's method, one of the most classical methods for PF, is embedded into the graph iteration network (GIN) to form an implicit residual learning architecture. To retain interpretability, the GIN follows a non-activation paradigm, in which the capacity for non-linear representation stems from the iterative architecture rather than from activation functions. Finally, without the need to reclaim global information, the GIN allows a shallower network structure by eliminating fully connected layers. Extensive numerical experiments are conducted on the IEEE 30-bus, 57-bus, 118-bus, and 300-bus power systems. The results validate the higher computational efficiency and better prediction performance of the proposed method compared with both classical approaches and prior data-driven approaches.
{"title":"Graph Learning for Power Flow Analysis: A Global-Receptive Graph Iteration Network Method","authors":"Junyan Huang;Yuanzheng Li;Shangyang He;Guokai Hao;Chunjie Zhou;Zhigang Zeng","doi":"10.1109/TNSE.2024.3506012","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3506012","url":null,"abstract":"The data-driven methods based on the graph convolution architecture provide a promising direction for accelerating power flow (PF) calculation. These methods directly predict operational states of power systems according to given conditions, such as loads, states of buses, topology, etc. However, we find that the neighborhood aggregation of the graph convolution architecture violates operational constraints of power systems. In this paper, a global-receptive graph iteration architecture that overcomes this shortcoming is designed to replace the graph convolution architecture. Specifically, Newton's method, one of the most classical methods for PF, is embedded into the graph iteration network (GIN) to form an implicit residual learning architecture. To retain the interpretability, the GIN follows a non-activation paradigm, in which the ability of non-linear representation stems from the iterative architecture rather than the activation function. Finally, without the demand to reclaim global information, the GIN allows shallower network structure by eliminating fully connected layers. Extensive numerical experiments are conducted on IEEE 30-bus, 57-bus, 118-bus, and 300-bus power systems. The results validate the higher computational efficiency and the better prediction performance of the proposed method, compared with both classical approaches and precedent data-driven approaches.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"12 2","pages":"599-609"},"PeriodicalIF":6.7,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143464348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}