{"title":"GNN-Adv: Defence Strategy from Adversarial Attack for Graph Neural Network","authors":"Lilapati Waikhom, Ripon Patgiri","doi":"10.1109/SILCON55242.2022.10028958","DOIUrl":null,"url":null,"abstract":"Deep learning-based models have demonstrated exceptional performances in diverse fields. However, recent research has revealed that adversarial attacks and minor input perturbations may easily deceive DNNs. Graph Neural Networks (GNNs) inherit this weakness. An opponent can persuade GNNs to generate inaccurate predictions by influencing a few edges in the graph. It results in severe consequences of adopting GNNs in safety-critical applications. The research focus has shifted in recent years to make GNNs more robust to adversarial attacks. This article proposes GNN-Adv, a novel approach for defending against numerous attacks that disturb the graph structure during training. Experiments demonstrate that GNN-Adv surpasses current peer approaches by an average of 15 % across five GNN approaches, four datasets, and three defense techniques. Remarkably, GNNs-Adv can successfully restore their current performance in the face of terrifying, directly targeted attacks.","PeriodicalId":183947,"journal":{"name":"2022 IEEE Silchar Subsection Conference (SILCON)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Silchar Subsection Conference (SILCON)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SILCON55242.2022.10028958","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
Abstract
Deep learning-based models have demonstrated exceptional performance across diverse fields. However, recent research has revealed that adversarial attacks and minor input perturbations can easily deceive deep neural networks (DNNs). Graph Neural Networks (GNNs) inherit this weakness: an adversary can cause a GNN to produce inaccurate predictions by perturbing only a few edges in the graph. This has severe consequences for adopting GNNs in safety-critical applications. In recent years, the research focus has shifted toward making GNNs more robust to adversarial attacks. This article proposes GNN-Adv, a novel approach for defending against a range of attacks that perturb the graph structure during training. Experiments demonstrate that GNN-Adv surpasses current peer approaches by an average of 15% across five GNN architectures, four datasets, and three defense techniques. Remarkably, GNN-Adv successfully restores model performance even in the face of severe, directly targeted attacks.
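The abstract does not detail GNN-Adv's training procedure. The sketch below only illustrates the general idea of defending against structure perturbations via adversarial training: a small GCN is trained on both the clean adjacency matrix and a perturbed copy, using random edge flips as a stand-in for a real attack. All names here (DenseGCN, random_edge_flips, adversarial_training_step) are hypothetical; this is a minimal illustration under assumed details, not the paper's method.

```python
# Minimal sketch (NOT the paper's implementation) of adversarial training
# against graph-structure perturbations. Hypothetical names throughout.
import torch
import torch.nn.functional as F

class DenseGCN(torch.nn.Module):
    """Two-layer GCN operating on a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    @staticmethod
    def normalize(adj):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    def forward(self, x, adj):
        a = self.normalize(adj)
        h = F.relu(a @ self.w1(x))
        return a @ self.w2(h)  # per-node class logits

def random_edge_flips(adj, n_flips):
    """Toy structure attack: flip a few random entries of the adjacency
    matrix (add edge if absent, remove if present), keeping it symmetric."""
    adj = adj.clone()
    idx = torch.randint(0, adj.size(0), (n_flips, 2))
    for i, j in idx:
        if i != j:
            adj[i, j] = adj[j, i] = 1.0 - adj[i, j]
    return adj

def adversarial_training_step(model, opt, x, adj, y, train_mask, n_flips=10):
    """One training step on both the clean and a perturbed graph, so the
    model learns predictions that are stable under small edge changes."""
    opt.zero_grad()
    loss = F.cross_entropy(model(x, adj)[train_mask], y[train_mask])
    adj_pert = random_edge_flips(adj, n_flips)
    loss = loss + F.cross_entropy(model(x, adj_pert)[train_mask], y[train_mask])
    loss.backward()
    opt.step()
    return loss.item()
```

In a stronger defense of this flavor, the random flips would be replaced by worst-case perturbations (e.g., chosen by gradient on the adjacency), which is what structure-attack literature typically targets; the training loop itself stays the same.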