Authors: Xi Zeng, Qingyun Dai, Fangyu Lei
DOI: 10.1117/12.2639209
Journal: Neural Networks, Information and Communication Engineering
Published: 2022-06-30 (Journal Article)
Position encoding for heterogeneous graph neural networks
Many real-world networks are naturally modeled as heterogeneous graphs, which consist of multiple types of nodes and links. When a heterogeneous graph has no node attributes, or some features on the graph are missing, previous models perform poorly. In this paper, we argue that useful position features can be generated under the guidance of the graph's topological information, and we present a generic framework for Heterogeneous Graph Neural Networks (HGNNs), termed Position Encoding (PE). First, PE leverages existing node embedding methods to capture the implicit semantics of a graph and generate low-dimensional node embeddings. Second, for each task-related target node, PE generates a corresponding sampled subgraph, in which node embeddings are used to compute relative positions; these positions are encoded into position features that can be used directly or as additional features. The set of subgraphs with position features can then be easily combined with any desired Graph Neural Network (GNN) or HGNN to learn representations of the target nodes. We evaluated our method on graph classification tasks over three commonly used heterogeneous graph datasets under two processing strategies, and the experimental results show the superiority of PE over the baselines.
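The abstract's pipeline (sample a subgraph around each target node, compute positions relative to the target from node embeddings, and attach them as features) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the sampling scheme (a plain k-hop BFS here) and the relative-position encoding (component-wise embedding difference) are assumptions, since the abstract does not specify either.

```python
def position_encode(embeddings, adjacency, target, num_hops=1):
    """Sketch of the PE idea: sample a subgraph around `target`,
    then encode each sampled node's position relative to the target
    using precomputed low-dimensional node embeddings.

    embeddings: list of embedding vectors, one per node
    adjacency:  dense 0/1 adjacency matrix (list of lists)
    target:     index of the task-related target node
    """
    # k-hop BFS subgraph sampling (an illustrative choice; the
    # paper's actual sampling procedure is not given in the abstract).
    subgraph = {target}
    frontier = {target}
    for _ in range(num_hops):
        nxt = set()
        for u in frontier:
            nxt.update(v for v, edge in enumerate(adjacency[u]) if edge)
        frontier = nxt - subgraph
        subgraph |= nxt

    # Relative position w.r.t. the target, taken here as the
    # component-wise difference of embeddings (one plausible encoding;
    # these features would be concatenated to node features before
    # feeding the subgraph to a GNN/HGNN).
    t = embeddings[target]
    return {v: [x - y for x, y in zip(embeddings[v], t)]
            for v in sorted(subgraph)}
```

For example, on a 3-node path graph with 1-D embeddings reflecting position along the path, `position_encode(emb, adj, target=0)` returns relative-position features only for the target and its 1-hop neighborhood.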