{"title":"Bridging distribution gaps: invariant pattern discovery for dynamic graph learning","authors":"Yucheng Jin, Maoyi Wang, Yun Xiong, Zhizhou Ren, Cuiying Huo, Feng Zhu, Jiawei Zhang, Guangzhong Wang, Haoran Chen","doi":"10.1007/s11280-024-01283-2","DOIUrl":null,"url":null,"abstract":"<p>Temporal graph networks (TGNs) have been proposed to facilitate learning on dynamic graphs which are composed of interaction events among nodes. However, existing TGNs suffer from poor generalization under distribution shifts that occur over time. It is vital to discover invariant patterns with stable predictive power across various distributions to improve the generalization ability. Invariant pattern discovery on dynamic graphs is non-trivial, as long-term history of interaction events is compressed into the memory by TGNs in an entangled way, making invariant pattern discovery difficult. Furthermore, TGNs process interaction events chronologically in batches to obtain up-to-date representations. Each batch consisting of chronologically-close events lacks diversity for identifying invariance under distribution shifts. To tackle these challenges, we propose a novel method called <span>Smile</span>, which stands for <u>S</u>tructural te<u>M</u>poral <u>I</u>nvariant <u>LE</u>arning. Specifically, we first propose the disentangled graph memory network, which selectively extracts pattern information from long-term history through the disentangled memory gating and attention network. The interaction history approximator is further introduced to provide diverse interaction distributions efficiently. <span>Smile</span> guarantees prediction stability under diverse temporal-dynamic distributions by regularizing invariance under cross-time distribution interventions. Experimental results on real-world datasets demonstrate that <span>Smile</span> outperforms baselines, yielding substantial performance improvements.</p>","PeriodicalId":501180,"journal":{"name":"World Wide Web","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"World Wide Web","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s11280-024-01283-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Temporal graph networks (TGNs) have been proposed to facilitate learning on dynamic graphs, which are composed of interaction events among nodes. However, existing TGNs generalize poorly under distribution shifts that occur over time. Discovering invariant patterns with stable predictive power across distributions is therefore vital for improving generalization. Invariant pattern discovery on dynamic graphs is non-trivial: TGNs compress the long-term history of interaction events into memory in an entangled way, which obscures the invariant patterns. Furthermore, TGNs process interaction events chronologically in batches to keep representations up to date, and each batch, consisting of chronologically close events, lacks the distributional diversity needed to identify invariance under shifts. To tackle these challenges, we propose a novel method called Smile, which stands for Structural teMporal Invariant LEarning. Specifically, we first propose the disentangled graph memory network, which selectively extracts pattern information from long-term history through disentangled memory gating and an attention network. We further introduce an interaction history approximator that efficiently provides diverse interaction distributions. Smile guarantees prediction stability under diverse temporal-dynamic distributions by regularizing invariance under cross-time distribution interventions. Experimental results on real-world datasets demonstrate that Smile outperforms baselines, yielding substantial performance improvements.
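The abstract describes the method only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation, of the two ideas it names: a gated memory update that disentangles an invariant component from a time-specific one, and an invariance regularizer that penalizes prediction variance across cross-time interventions. All names here (DisentangledMemoryGate, cross_time_invariance_penalty, the dimensions, and the toy usage) are assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's code): a gate that splits a memory
# update into invariant and variant parts, plus a penalty that measures how much
# predictions change when the variant part is swapped across time windows.
import torch
import torch.nn as nn


class DisentangledMemoryGate(nn.Module):
    """Splits a memory update into invariant and variant parts via learned gates."""

    def __init__(self, memory_dim: int, message_dim: int):
        super().__init__()
        self.inv_gate = nn.Linear(memory_dim + message_dim, memory_dim)
        self.var_gate = nn.Linear(memory_dim + message_dim, memory_dim)
        self.update = nn.Linear(memory_dim + message_dim, memory_dim)

    def forward(self, memory: torch.Tensor, message: torch.Tensor):
        h = torch.cat([memory, message], dim=-1)
        candidate = torch.tanh(self.update(h))
        inv = torch.sigmoid(self.inv_gate(h)) * candidate   # pattern kept across time
        var = torch.sigmoid(self.var_gate(h)) * candidate   # time-specific residual
        return inv, var


def cross_time_invariance_penalty(predict, inv_repr: torch.Tensor,
                                  var_reprs: list) -> torch.Tensor:
    """Variance of predictions when the variant part is swapped across time windows.

    `predict` maps (invariant, variant) representations to logits; `var_reprs`
    holds variant representations drawn from different historical periods
    (the "interventions"). A low penalty means the invariant part alone
    drives the prediction.
    """
    preds = torch.stack([predict(inv_repr, v) for v in var_reprs], dim=0)
    return preds.var(dim=0).mean()


if __name__ == "__main__":
    # Toy usage with random tensors standing in for node memories and messages.
    gate = DisentangledMemoryGate(memory_dim=16, message_dim=8)
    memory, message = torch.randn(32, 16), torch.randn(32, 8)
    inv, var = gate(memory, message)

    head = nn.Linear(32, 1)
    predict = lambda i, v: head(torch.cat([i, v], dim=-1))
    penalty = cross_time_invariance_penalty(predict, inv, [var, torch.randn_like(var)])
    print(penalty.item())
```

The variance-over-interventions penalty follows the general spirit of invariant risk regularization; the paper's actual objective, memory architecture, and interaction history approximator may differ substantially.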