{"title":"图形对比学习的自适应负表征","authors":"Qi Zhang, Cheng Yang, Chuan Shi","doi":"10.1016/j.aiopen.2023.10.005","DOIUrl":null,"url":null,"abstract":"<div><p>Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives is introduced to GCL, which can provide more challenging self-supervised objectives and alleviate over-fitting issues. These methods use different graphs in the same mini-batch as negative examples, and assign larger weights to true hard negative ones. However, the influence of such weighting strategies is limited in practice, since a small mini-batch may not contain any challenging enough negative examples. In this paper, we aim to offer a more flexible solution to affect the hardness of negatives by directly manipulating the representations of negatives. By assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation process of graph encoder may introduce biases to graph representations, we first design a negative representation generator (NRG) which (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match the biases. Then we design a generation loss to train the parameters in NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework ANGCL has 1.6% relative improvement over the best baseline, and can be successfully integrated with three types of graph augmentations. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 79-86"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666651023000219/pdfft?md5=b0c3c461206c9fd2fcce93a0a80db1a1&pid=1-s2.0-S2666651023000219-main.pdf","citationCount":"0","resultStr":"{\"title\":\"Adaptive negative representations for graph contrastive learning\",\"authors\":\"Qi Zhang, Cheng Yang, Chuan Shi\",\"doi\":\"10.1016/j.aiopen.2023.10.005\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives is introduced to GCL, which can provide more challenging self-supervised objectives and alleviate over-fitting issues. These methods use different graphs in the same mini-batch as negative examples, and assign larger weights to true hard negative ones. However, the influence of such weighting strategies is limited in practice, since a small mini-batch may not contain any challenging enough negative examples. In this paper, we aim to offer a more flexible solution to affect the hardness of negatives by directly manipulating the representations of negatives. 
By assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation process of graph encoder may introduce biases to graph representations, we first design a negative representation generator (NRG) which (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match the biases. Then we design a generation loss to train the parameters in NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework ANGCL has 1.6% relative improvement over the best baseline, and can be successfully integrated with three types of graph augmentations. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL.</p></div>\",\"PeriodicalId\":100068,\"journal\":{\"name\":\"AI Open\",\"volume\":\"5 \",\"pages\":\"Pages 79-86\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S2666651023000219/pdfft?md5=b0c3c461206c9fd2fcce93a0a80db1a1&pid=1-s2.0-S2666651023000219-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI Open\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666651023000219\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI Open","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666651023000219","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive negative representations for graph contrastive learning
Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives has been introduced to GCL, providing more challenging self-supervised objectives and alleviating over-fitting. These methods use other graphs in the same mini-batch as negative examples and assign larger weights to the truly hard ones. In practice, however, the influence of such weighting strategies is limited, since a small mini-batch may not contain any sufficiently challenging negative examples. In this paper, we offer a more flexible way to control the hardness of negatives by directly manipulating their representations. Assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation of the graph encoder may introduce biases into graph representations, we first design a negative representation generator (NRG) that (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match these biases. We then design a generation loss to train the parameters of the NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework, ANGCL, achieves a 1.6% relative improvement over the best baseline and can be successfully integrated with three types of graph augmentation. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL.
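To make the NRG idea concrete, below is a minimal PyTorch sketch of generating adaptive negatives by perturbing real-graph prototypes, under stated assumptions: the class and function names (NRG, generation_loss), the perturbation form, and the loss weighting are illustrative and not the paper's exact formulation, and the perturbation is applied to the encoder's output rather than injected through its internal feed-forward layers as the abstract describes.

```python
# Illustrative sketch of adaptive negative representation generation,
# loosely following the abstract. All names and loss forms here are
# assumptions for exposition, not the ANGCL implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NRG(nn.Module):
    """Perturbs real graph representations (prototypes) with learnable
    parameters to produce adaptive negative representations."""
    def __init__(self, dim: int):
        super().__init__()
        # Parameterized perturbation; a stand-in for the paper's
        # perturbations injected through the encoder's computation.
        self.perturb = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())

    def forward(self, h_proto: torch.Tensor) -> torch.Tensor:
        # Anchor the negative at a real-graph prototype and add a
        # learned perturbation, so it does not deviate far from it.
        return h_proto + self.perturb(h_proto)

def generation_loss(h_anchor, h_neg, h_proto, alpha=1.0):
    """Illustrative generation objective: make negatives hard
    (similar to the anchor) while staying near real prototypes."""
    hardness = -F.cosine_similarity(h_anchor, h_neg, dim=-1).mean()
    proximity = (h_neg - h_proto).pow(2).sum(-1).mean()
    return hardness + alpha * proximity

# Usage sketch with random stand-ins for encoder outputs.
h_anchor = torch.randn(32, 128)             # anchor-view representations
h_proto = torch.roll(h_anchor, 1, dims=0)   # other graphs in the batch as prototypes
nrg = NRG(128)
h_neg = nrg(h_proto.detach())               # adaptive negative representations
loss = generation_loss(h_anchor.detach(), h_neg, h_proto.detach())
loss.backward()                             # gradients flow only into NRG parameters
```

In this sketch, minimizing the loss pushes the generated negatives toward the anchor (making them harder) while the proximity term keeps them close to real-sample prototypes; the generated h_neg would then serve as additional negatives in the downstream contrastive objective.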