Leila Ben Saad, Nama Ajay Nagendra, B. Beferull-Lozano
{"title":"随机扰动和量化误差下的邻域图神经网络","authors":"Leila Ben Saad, Nama Ajay Nagendra, B. Beferull-Lozano","doi":"10.1109/spawc51304.2022.9834020","DOIUrl":null,"url":null,"abstract":"Graph convolutional neural networks (GCNNs) have emerged as a promising tool in the deep learning community to learn complex hidden relationships of data generated from non-Euclidean domains and represented as graphs. GCNNs are formed by a cascade of layers of graph filters, which replace the classical convolution operation in convolutional neural networks. These graph filters, when operated over real networks, can be subject to random perturbations due to link losses that can be caused by noise, interference and adversarial attacks. In addition, these graph filters are executed by finite-precision processors, which generate numerical quantization errors that may affect their performance. Despite the research works studying the effect of either graph perturbations or quantization in GCNNs, their robustness against both of these problems jointly is still not well investigated and understood. In this paper, we propose a quantized GCNN architecture based on neighborhood graph filters under random graph perturbations. We investigate the stability of such architecture to both random graph perturbations and quantization errors. We prove that the expected error due to quantization and random graph perturbations at the GCNN output is upper-bounded and we show how this bound can be controlled. 
Numerical experiments are conducted to corroborate our theoretical findings.","PeriodicalId":423807,"journal":{"name":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neighborhood Graph Neural Networks under Random Perturbations and Quantization Errors\",\"authors\":\"Leila Ben Saad, Nama Ajay Nagendra, B. Beferull-Lozano\",\"doi\":\"10.1109/spawc51304.2022.9834020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Graph convolutional neural networks (GCNNs) have emerged as a promising tool in the deep learning community to learn complex hidden relationships of data generated from non-Euclidean domains and represented as graphs. GCNNs are formed by a cascade of layers of graph filters, which replace the classical convolution operation in convolutional neural networks. These graph filters, when operated over real networks, can be subject to random perturbations due to link losses that can be caused by noise, interference and adversarial attacks. In addition, these graph filters are executed by finite-precision processors, which generate numerical quantization errors that may affect their performance. Despite the research works studying the effect of either graph perturbations or quantization in GCNNs, their robustness against both of these problems jointly is still not well investigated and understood. In this paper, we propose a quantized GCNN architecture based on neighborhood graph filters under random graph perturbations. We investigate the stability of such architecture to both random graph perturbations and quantization errors. 
We prove that the expected error due to quantization and random graph perturbations at the GCNN output is upper-bounded and we show how this bound can be controlled. Numerical experiments are conducted to corroborate our theoretical findings.\",\"PeriodicalId\":423807,\"journal\":{\"name\":\"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/spawc51304.2022.9834020\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 23rd International Workshop on Signal Processing Advances in Wireless Communication (SPAWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/spawc51304.2022.9834020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Neighborhood Graph Neural Networks under Random Perturbations and Quantization Errors
Graph convolutional neural networks (GCNNs) have emerged as a promising tool in the deep learning community for learning complex hidden relationships in data generated from non-Euclidean domains and represented as graphs. GCNNs are formed by a cascade of layers of graph filters, which replace the classical convolution operation in convolutional neural networks. When operated over real networks, these graph filters can be subject to random perturbations due to link losses caused by noise, interference, and adversarial attacks. In addition, these graph filters are executed on finite-precision processors, which introduce numerical quantization errors that may affect their performance. Although prior research has studied the effect of either graph perturbations or quantization in GCNNs, their robustness against both of these problems jointly is still not well investigated or understood. In this paper, we propose a quantized GCNN architecture based on neighborhood graph filters under random graph perturbations. We investigate the stability of such an architecture with respect to both random graph perturbations and quantization errors. We prove that the expected error at the GCNN output due to quantization and random graph perturbations is upper-bounded, and we show how this bound can be controlled. Numerical experiments are conducted to corroborate our theoretical findings.
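The abstract describes GCNN layers built from graph filters that aggregate information over node neighborhoods, followed by quantization from finite-precision execution. As a minimal illustrative sketch (not the paper's exact model), the snippet below implements a K-hop polynomial graph filter y = Σ_k h_k S^k x over a graph shift operator S, and applies a uniform quantizer to the output; the shift matrix, filter taps, and step size are all hypothetical toy values.

```python
import numpy as np

def neighborhood_graph_filter(S, x, h):
    """Apply a polynomial graph filter y = sum_k h[k] * S^k x."""
    y = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)          # S^0 x = x
    for hk in h:
        y += hk * Sk_x
        Sk_x = S @ Sk_x             # advance to the next hop: S^{k+1} x
    return y

def quantize(y, step):
    """Uniform mid-tread quantizer with step size `step`."""
    return step * np.round(y / step)

# Toy example: adjacency matrix of a 4-node cycle graph as the shift operator.
S = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])  # impulse graph signal at node 0
h = [0.5, 0.3, 0.2]                 # hypothetical filter taps h_0, h_1, h_2

y = neighborhood_graph_filter(S, x, h)
y_q = quantize(y, step=0.1)
# Per-entry quantization error is bounded by step/2 = 0.05, consistent with
# the kind of bounded-error analysis the paper develops for the full cascade.
print(np.max(np.abs(y - y_q)))
```

In a full GCNN, several such filter layers would be cascaded with pointwise nonlinearities, and the paper's contribution is to bound the expected output error when both quantization and random edge losses (random perturbations of S) act on every layer.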