Title: Detecting Targets of Graph Adversarial Attacks With Edge and Feature Perturbations
Authors: Boyi Lee; Jhao-Yin Jhang; Lo-Yao Yeh; Ming-Yi Chang; Chia-Mei Chen; Chih-Ya Shen
DOI: 10.1109/TCSS.2023.3344642
Journal: IEEE Transactions on Computational Social Systems (JCR Q1, Computer Science, Cybernetics; Impact Factor 4.5)
Published: 2024-01-25 (Journal Article)
URL: https://ieeexplore.ieee.org/document/10414418/
Citations: 0
Abstract
Graph neural networks (GNNs) enable many novel applications and achieve excellent performance. However, their performance may be significantly degraded by graph adversarial attacks, which intentionally add small perturbations to the graph. Previous countermeasures usually handle such attacks by enhancing model robustness. However, robust models cannot identify the target nodes of the adversarial attacks, so we are unable to pinpoint the weak spots or analyze the causes and targets of the attacks. In this article, we study the important research problem of detecting the target nodes of graph adversarial attacks under the black-box detection scenario, which is particularly challenging because the detection models have no knowledge of the attacker, while attackers usually employ unnoticeability strategies to minimize the chance of being detected. To the best of our knowledge, this is the first work that aims to detect the target nodes of graph adversarial attacks under the black-box detection scenario. We propose two detection models, named Det-H and Det-RL, which employ different techniques to effectively detect the target nodes under the black-box detection scenario against various graph adversarial attacks. To enhance the generalization of the proposed detectors, we further propose two novel surrogate attackers that can generate effective attack examples and camouflage their attack traces for training robust detectors. In addition, we propose three strategies to effectively improve training efficiency. Experimental results on multiple datasets show that our proposed detectors significantly outperform the baselines against multiple state-of-the-art graph adversarial attackers with various attack strategies. The proposed Det-RL detector achieves an average area under the curve (AUC) of 0.945 against all attackers, and our efficiency-improving strategies save up to 91% of the training time.
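The two core notions in the abstract, edge perturbations around a target node and AUC-based evaluation of a detector, can be illustrated with a minimal, hypothetical sketch. This is not the paper's Det-H/Det-RL method or its surrogate attackers; the flip-random-edges attacker and the changed-edge-count "detector" below are stand-ins invented purely to show how a target-node detector would be scored.

```python
import numpy as np

def perturb_edges(adj, target, budget, rng):
    """Illustrative attacker (not the paper's): flip `budget` edges
    incident to `target` in a symmetric 0/1 adjacency matrix."""
    adj = adj.copy()
    others = [v for v in range(adj.shape[0]) if v != target]
    for v in rng.choice(others, size=budget, replace=False):
        adj[target, v] = adj[v, target] = 1 - adj[target, v]  # flip edge
    return adj

def auc(scores, labels):
    """Rank-based AUC: probability that a random positive node
    outscores a random negative node (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
n = 8
adj = (rng.random((n, n)) < 0.3).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops
attacked = perturb_edges(adj, target=0, budget=2, rng=rng)

# A naive detector score: how many incident edges changed per node.
scores = np.abs(attacked - adj).sum(axis=1)
labels = [1] + [0] * (n - 1)                       # node 0 is the true target
print(auc(scores.tolist(), labels))                # → 1.0 (target ranked first)
```

In the paper's black-box setting the detector cannot diff the clean and attacked graphs as done here; it only sees the (possibly attacked) graph, which is what makes achieving an average AUC of 0.945 nontrivial.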
Journal overview:
IEEE Transactions on Computational Social Systems focuses on such topics as modeling, simulation, analysis, and understanding of social systems from the quantitative and/or computational perspective. "Systems" include man-man, man-machine, and machine-machine organizations and adversarial situations as well as social media structures and their dynamics. More specifically, the Transactions publishes articles on modeling the dynamics of social systems, methodologies for incorporating and representing socio-cultural and behavioral aspects in computational modeling, analysis of social system behavior and structure, and paradigms for social systems modeling and simulation. The journal also features articles on social network dynamics, social intelligence and cognition, social systems design and architectures, socio-cultural modeling and representation, and computational behavior modeling, and their applications.