Exploring Node Classification Uncertainty in Graph Neural Networks

Md. Farhadul Islam, Sarah Zabeen, Fardin Bin Rahman, Md. Azharul Islam, Fahmid Bin Kibria, Meem Arafat Manab, Dewan Ziaul Karim, Annajiat Alim Rasel

Proceedings of the 2023 ACM Southeast Conference, published 2023-04-12. DOI: 10.1145/3564746.3587019
Abstract
Graph Neural Networks (GNNs) offer a robust framework for representing and investigating interconnected data, deftly combining graph theory with machine learning. Most studies focus on predictive performance, but uncertainty measurement does not receive enough attention. In this study, we measure the predictive uncertainty of several GNN models to show that high accuracy does not ensure reliable predictions. We apply dropout during the inference phase to quantify the uncertainty of these GNN models. This method, known as Monte Carlo Dropout (MCD), is an effective low-complexity approximation for estimating uncertainty. In our investigation, we evaluated five GNN models on a benchmark dataset: Graph Convolutional Network (GCN), Graph Attention Network (GAT), Personalized Propagation of Neural Predictions (PPNP), PPNP's fast approximation (APPNP), and GraphSAGE. GAT proved superior to all the other models in both accuracy and uncertainty for node classification. Among the remaining models, some that fared better in accuracy fell behind when compared on classification uncertainty.
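The core of the MCD technique described above is simple: keep dropout active at inference time, run several stochastic forward passes, and treat the spread of the predictions as an uncertainty estimate. The sketch below illustrates this idea on a toy linear model in pure Python; the model, weights, and dropout rate are illustrative assumptions, not the paper's actual GNN architectures or hyperparameters.

```python
import random
import statistics

random.seed(0)

def stochastic_forward(x, weights, p_drop=0.5):
    """One forward pass with dropout left ON: each input is dropped with
    probability p_drop, and survivors are rescaled by 1/(1 - p_drop)
    (inverted dropout), followed by a toy linear layer."""
    kept = [xi / (1.0 - p_drop) if random.random() >= p_drop else 0.0
            for xi in x]
    return sum(w * k for w, k in zip(weights, kept))

def mc_dropout_predict(x, weights, n_passes=200):
    """Monte Carlo Dropout: average n_passes stochastic forward passes.
    The sample mean serves as the prediction; the sample standard
    deviation serves as the model's predictive uncertainty."""
    samples = [stochastic_forward(x, weights) for _ in range(n_passes)]
    return statistics.mean(samples), statistics.stdev(samples)

# Illustrative input and weights (hypothetical values).
x = [1.0, 2.0, 3.0]
w = [0.5, -0.2, 0.1]
mean, std = mc_dropout_predict(x, w)
print(f"prediction={mean:.3f}, uncertainty={std:.3f}")
```

In a real GNN, `stochastic_forward` would be a full forward pass of the trained network with its dropout layers kept in training mode; the per-node standard deviation (or predictive entropy over class probabilities) is what the paper compares across GCN, GAT, PPNP, APPNP, and GraphSAGE.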