{"title":"Multimodal Graph Meta Contrastive Learning","authors":"Feng Zhao, Donglin Wang","doi":"10.1145/3459637.3482151","DOIUrl":null,"url":null,"abstract":"In recent years, graph contrastive learning has achieved promising node classification accuracy using graph neural networks (GNNs), which can learn representations in an unsupervised manner. However, such representations cannot be generalized to unseen novel classes with only few-shot labeled samples in spite of exhibiting good performance on seen classes. In order to assign generalization capability to graph contrastive learning, we propose multimodal graph meta contrastive learning (MGMC) in this paper, which integrates multimodal meta learning into graph contrastive learning. On one hand, MGMC accomplishes effectively fast adapation on unseen novel classes by the aid of bilevel meta optimization to solve few-shot problems. On the other hand, MGMC can generalize quickly to a generic dataset with multimodal distribution by inducing the FiLM-based modulation module. In addition, MGMC incorporates the lastest graph contrastive learning method that does not rely on the onstruction of augmentations and negative examples. To our best knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensieve experimental results on three graph-structure datasets demonstrate the effectiveness of our proposed MGMC in few-shot node classification tasks.","PeriodicalId":405296,"journal":{"name":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","volume":"427 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 30th ACM International Conference on Information & Knowledge Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3459637.3482151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10
Abstract
In recent years, graph contrastive learning with graph neural networks (GNNs) has achieved promising node classification accuracy by learning representations in an unsupervised manner. However, such representations, despite performing well on seen classes, cannot generalize to unseen novel classes for which only a few labeled samples are available. To endow graph contrastive learning with generalization capability, we propose multimodal graph meta contrastive learning (MGMC) in this paper, which integrates multimodal meta learning into graph contrastive learning. On one hand, MGMC achieves effective fast adaptation to unseen novel classes with the aid of bilevel meta optimization, thereby solving few-shot problems. On the other hand, MGMC generalizes quickly to a generic dataset with a multimodal distribution by introducing a FiLM-based modulation module. In addition, MGMC incorporates the latest graph contrastive learning method, which does not rely on the construction of augmentations and negative examples. To the best of our knowledge, this is the first work to investigate graph contrastive learning for few-shot problems. Extensive experimental results on three graph-structured datasets demonstrate the effectiveness of the proposed MGMC on few-shot node classification tasks.
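The abstract names two mechanisms: FiLM-based modulation, which conditions node representations on a task embedding via feature-wise scale and shift, and bilevel (MAML-style) meta optimization, which takes inner gradient steps on a task's support set and updates meta-parameters on its query loss. The PyTorch sketch below illustrates how these two pieces could fit together; it is a minimal sketch under stated assumptions, not the paper's actual implementation. In particular, `FiLMModulation`, `maml_step`, the task dictionary layout, the mean-pooled task embedding, the single inner gradient step, and the linear classifier head are all illustrative choices, and a stand-in linear encoder replaces the (graph-structure-aware) contrastive GNN encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FiLMModulation(nn.Module):
    """FiLM layer: produce per-task scale (gamma) and shift (beta) vectors
    from a task embedding and apply them feature-wise: h' = gamma * h + beta."""
    def __init__(self, task_dim, feat_dim):
        super().__init__()
        self.gamma_net = nn.Linear(task_dim, feat_dim)
        self.beta_net = nn.Linear(task_dim, feat_dim)

    def forward(self, h, task_emb):
        gamma = self.gamma_net(task_emb)  # (feat_dim,)
        beta = self.beta_net(task_emb)    # (feat_dim,)
        return gamma * h + beta           # broadcast over all nodes

def maml_step(encoder, film, classifier, tasks, inner_lr=0.01, meta_opt=None):
    """One outer (meta) update over a batch of few-shot node-classification
    tasks. The encoder is treated as a frozen, contrastively pretrained
    embedding function; a real GNN encoder would also take the graph
    structure (e.g. edge_index), omitted here for brevity (an assumption)."""
    meta_loss = 0.0
    for task in tasks:
        # Task embedding: mean of support features (an illustrative choice).
        task_emb = task["support_x"].mean(dim=0)
        sup_h = film(encoder(task["support_x"]), task_emb)

        # Inner loop: one gradient step on the support set, done functionally
        # on cloned weights so the outer loss can differentiate through it.
        fast_w = [p.clone() for p in classifier.parameters()]  # [weight, bias]
        logits = F.linear(sup_h, fast_w[0], fast_w[1])
        inner_loss = F.cross_entropy(logits, task["support_y"])
        grads = torch.autograd.grad(inner_loss, fast_w, create_graph=True)
        fast_w = [w - inner_lr * g for w, g in zip(fast_w, grads)]

        # Outer loss: evaluate the adapted weights on the query set.
        qry_h = film(encoder(task["query_x"]), task_emb)
        qry_logits = F.linear(qry_h, fast_w[0], fast_w[1])
        meta_loss = meta_loss + F.cross_entropy(qry_logits, task["query_y"])

    meta_loss = meta_loss / len(tasks)
    if meta_opt is not None:
        meta_opt.zero_grad()
        meta_loss.backward()  # flows through the inner step into film/classifier
        meta_opt.step()
    return meta_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = nn.Linear(16, 32)    # stand-in for a pretrained GNN encoder (frozen)
    film = FiLMModulation(task_dim=16, feat_dim=32)
    classifier = nn.Linear(32, 5)  # 5-way classification head
    opt = torch.optim.Adam(
        list(film.parameters()) + list(classifier.parameters()), lr=1e-3)
    # One toy 5-way task with random support/query nodes.
    task = {"support_x": torch.randn(25, 16),
            "support_y": torch.randint(0, 5, (25,)),
            "query_x": torch.randn(50, 16),
            "query_y": torch.randint(0, 5, (50,))}
    print(maml_step(encoder, film, classifier, [task], meta_opt=opt).item())
```

The design intuition this sketch tries to capture is that FiLM lets a single set of meta-parameters serve a multimodal task distribution: the shared encoder stays fixed, while cheap per-task scale and shift vectors specialize its output to whichever mode the current task comes from before the bilevel adaptation runs.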