Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks

Amirhossein Nouranizadeh, Fatemeh Tabatabaei Far, Mohammad Rahmati
{"title":"用于时态网络动态链接预测的对比表征学习","authors":"Amirhossein Nouranizadeh, Fatemeh Tabatabaei Far, Mohammad Rahmati","doi":"arxiv-2408.12753","DOIUrl":null,"url":null,"abstract":"Evolving networks are complex data structures that emerge in a wide range of\nsystems in science and engineering. Learning expressive representations for\nsuch networks that encode their structural connectivity and temporal evolution\nis essential for downstream data analytics and machine learning applications.\nIn this study, we introduce a self-supervised method for learning\nrepresentations of temporal networks and employ these representations in the\ndynamic link prediction task. While temporal networks are typically\ncharacterized as a sequence of interactions over the continuous time domain,\nour study focuses on their discrete-time versions. This enables us to balance\nthe trade-off between computational complexity and precise modeling of the\ninteractions. We propose a recurrent message-passing neural network\narchitecture for modeling the information flow over time-respecting paths of\ntemporal networks. The key feature of our method is the contrastive training\nobjective of the model, which is a combination of three loss functions: link\nprediction, graph reconstruction, and contrastive predictive coding losses. The\ncontrastive predictive coding objective is implemented using infoNCE losses at\nboth local and global scales of the input graphs. We empirically show that the\nadditional self-supervised losses enhance the training and improve the model's\nperformance in the dynamic link prediction task. The proposed method is tested\non Enron, COLAB, and Facebook datasets and exhibits superior results compared\nto existing models.","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Contrastive Representation Learning for Dynamic Link Prediction in Temporal Networks\",\"authors\":\"Amirhossein Nouranizadeh, Fatemeh Tabatabaei Far, Mohammad Rahmati\",\"doi\":\"arxiv-2408.12753\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evolving networks are complex data structures that emerge in a wide range of\\nsystems in science and engineering. Learning expressive representations for\\nsuch networks that encode their structural connectivity and temporal evolution\\nis essential for downstream data analytics and machine learning applications.\\nIn this study, we introduce a self-supervised method for learning\\nrepresentations of temporal networks and employ these representations in the\\ndynamic link prediction task. While temporal networks are typically\\ncharacterized as a sequence of interactions over the continuous time domain,\\nour study focuses on their discrete-time versions. This enables us to balance\\nthe trade-off between computational complexity and precise modeling of the\\ninteractions. We propose a recurrent message-passing neural network\\narchitecture for modeling the information flow over time-respecting paths of\\ntemporal networks. The key feature of our method is the contrastive training\\nobjective of the model, which is a combination of three loss functions: link\\nprediction, graph reconstruction, and contrastive predictive coding losses. 
The\\ncontrastive predictive coding objective is implemented using infoNCE losses at\\nboth local and global scales of the input graphs. We empirically show that the\\nadditional self-supervised losses enhance the training and improve the model's\\nperformance in the dynamic link prediction task. The proposed method is tested\\non Enron, COLAB, and Facebook datasets and exhibits superior results compared\\nto existing models.\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.12753\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.12753","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Evolving networks are complex data structures that emerge in a wide range of systems in science and engineering. Learning expressive representations for such networks that encode their structural connectivity and temporal evolution is essential for downstream data analytics and machine learning applications. In this study, we introduce a self-supervised method for learning representations of temporal networks and employ these representations in the dynamic link prediction task. While temporal networks are typically characterized as a sequence of interactions over the continuous time domain, our study focuses on their discrete-time versions. This enables us to balance the trade-off between computational complexity and precise modeling of the interactions. We propose a recurrent message-passing neural network architecture for modeling the information flow over time-respecting paths of temporal networks. The key feature of our method is the contrastive training objective of the model, which is a combination of three loss functions: link prediction, graph reconstruction, and contrastive predictive coding losses. The contrastive predictive coding objective is implemented using infoNCE losses at both local and global scales of the input graphs. We empirically show that the additional self-supervised losses enhance the training and improve the model's performance in the dynamic link prediction task. The proposed method is tested on Enron, COLAB, and Facebook datasets and exhibits superior results compared to existing models.
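The abstract describes the architecture only at a high level. As a concrete illustration, the following is a minimal PyTorch sketch of one discrete-time recurrent message-passing step, assuming mean aggregation over the current snapshot's adjacency matrix and a GRU state update; the class name, aggregation scheme, and update rule are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class RecurrentMessagePassing(nn.Module):
    """One discrete-time step: aggregate neighbor messages from the
    current snapshot, then update each node's hidden state with a GRU.
    A generic sketch of the recurrent message-passing idea in the
    abstract, not the paper's exact architecture."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.msg = nn.Linear(in_dim + hid_dim, hid_dim)  # message function
        self.gru = nn.GRUCell(hid_dim, hid_dim)          # recurrent update

    def forward(self, x: torch.Tensor, adj: torch.Tensor,
                h: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features of the current snapshot
        # adj: (N, N) snapshot adjacency matrix
        # h: (N, hid_dim) hidden states carried over from step t-1
        deg = adj.sum(-1, keepdim=True).clamp(min=1)     # avoid divide-by-zero
        m = (adj @ torch.relu(self.msg(torch.cat([x, h], dim=-1)))) / deg
        return self.gru(m, h)                            # new hidden state per node
```

Unrolling such a cell over the snapshot sequence confines information flow to time-respecting paths: a node's state at step t can depend only on edges observed at steps up to t.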
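Likewise, the combined training objective can be sketched as below. The InfoNCE formulation and the weights lam_recon and lam_cpc are assumptions for illustration; the abstract states only that the three losses are combined and that InfoNCE is applied at the local (node) and global (graph) scales. Here z_t/z_next would be node embeddings at consecutive steps and g_t/g_next graph-level readouts (e.g., mean-pooled node states), so one InfoNCE term realizes each of the two contrastive scales.

```python
import torch
import torch.nn.functional as F

def infonce(anchors: torch.Tensor, positives: torch.Tensor,
            temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch serves as a negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature                      # (N, N) similarities
    targets = torch.arange(a.size(0), device=a.device)    # matching-row positives
    return F.cross_entropy(logits, targets)

def total_loss(link_logits, link_labels, adj_logits, adj_true,
               z_t, z_next, g_t, g_next,
               lam_recon: float = 1.0, lam_cpc: float = 1.0) -> torch.Tensor:
    """Combination stated in the abstract: link prediction +
    graph reconstruction + contrastive predictive coding, where the
    CPC term uses InfoNCE at local (z) and global (g) scales.
    The lam_* weights are assumptions; the paper does not give them here."""
    l_link = F.binary_cross_entropy_with_logits(link_logits, link_labels)
    l_recon = F.binary_cross_entropy_with_logits(adj_logits, adj_true)
    l_cpc = infonce(z_t, z_next) + infonce(g_t, g_next)
    return l_link + lam_recon * l_recon + lam_cpc * l_cpc
```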