{"title":"Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory","authors":"Wenjie Yan;Ziqi Li;Yongjun Qi","doi":"10.23919/cje.2022.00.342","DOIUrl":null,"url":null,"abstract":"The robustness of graph neural networks (GNNs) is a critical research topic in deep learning. Many researchers have designed regularization methods to enhance the robustness of neural networks, but there is a lack of theoretical analysis on the principle of robustness. In order to tackle the weakness of current robustness designing methods, this paper gives new insights into how to guarantee the robustness of GNNs. A novel regularization strategy named Lya-Reg is designed to guarantee the robustness of GNNs by Lyapunov theory. Our results give new insights into how regularization can mitigate the various adversarial effects on different graph signals. Extensive experiments on various public datasets demonstrate that the proposed regularization method is more robust than the state-of-the-art methods such as \n<tex>$L_1$</tex>\n-norm, \n<tex>$L_2$</tex>\n-norm, \n<tex>$L_{21}$</tex>\n-norm, Pro-GNN, PA-GNN and GARNET against various types of graph adversarial attacks.","PeriodicalId":50701,"journal":{"name":"Chinese Journal of Electronics","volume":"33 3","pages":"732-741"},"PeriodicalIF":1.6000,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10543212","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chinese Journal of Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10543212/","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
The robustness of graph neural networks (GNNs) is a critical research topic in deep learning. Many researchers have designed regularization methods to enhance the robustness of neural networks, but theoretical analysis of why these methods confer robustness is lacking. To address this weakness of current robustness design methods, this paper gives new insights into how to guarantee the robustness of GNNs. A novel regularization strategy named Lya-Reg is designed, based on Lyapunov theory, to guarantee the robustness of GNNs. Our results give new insights into how regularization can mitigate various adversarial effects on different graph signals. Extensive experiments on various public datasets demonstrate that the proposed regularization method is more robust than state-of-the-art methods such as the $L_1$-norm, $L_2$-norm, $L_{21}$-norm, Pro-GNN, PA-GNN and GARNET against various types of graph adversarial attacks.
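To make the norm-based baselines concrete, the sketch below shows how an $L_1$, $L_2$, or $L_{21}$ weight penalty is typically added to a GNN training loss. This is a minimal illustration in plain PyTorch, not the paper's implementation: the layer, hyperparameters, and function names are assumptions, and Lya-Reg's Lyapunov-derived penalty is defined in the paper itself rather than reproduced here.

```python
# Illustrative sketch (assumed, not from the paper): a one-layer GCN trained with a
# generic weight-norm penalty, as in the L1 / L2 / L21 baselines Lya-Reg is compared against.
import torch
import torch.nn.functional as F

def norm_penalty(weight: torch.Tensor, kind: str = "L21") -> torch.Tensor:
    """Standard weight-matrix regularizers used as robustness baselines."""
    if kind == "L1":
        return weight.abs().sum()
    if kind == "L2":
        return (weight ** 2).sum()
    if kind == "L21":
        # L2 norm of each row, summed over rows (row-wise group sparsity).
        return torch.sqrt((weight ** 2).sum(dim=1) + 1e-12).sum()
    raise ValueError(f"unknown penalty: {kind}")

class OneLayerGCN(torch.nn.Module):
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, n_classes, bias=False)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized adjacency (possibly perturbed by an attacker); x: node features.
        return a_hat @ self.lin(x)

def train_step(model, a_hat, x, y, train_mask, optimizer, lam=1e-4, kind="L21"):
    # One training step: task loss on labeled nodes plus a norm penalty on the weights.
    optimizer.zero_grad()
    logits = model(a_hat, x)
    loss = F.cross_entropy(logits[train_mask], y[train_mask])
    loss = loss + lam * norm_penalty(model.lin.weight, kind)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup, swapping the regularizer means replacing `norm_penalty` with another scalar function of the model parameters; the paper's Lya-Reg takes that slot with a term derived from Lyapunov stability analysis.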
Journal Introduction:
CJE focuses on the emerging fields of electronics, publishing innovative and transformative research papers. Most papers published in CJE are from universities and research institutes, presenting their innovative research results. Both theoretical and practical contributions are encouraged, and original research papers reporting novel solutions to hot topics in electronics are strongly encouraged.