Learning Neural Point Processes with Latent Graphs

Qiang Zhang, Aldo Lipani, Emine Yilmaz
{"title":"Learning Neural Point Processes with Latent Graphs","authors":"Qiang Zhang, Aldo Lipani, Emine Yilmaz","doi":"10.1145/3442381.3450135","DOIUrl":null,"url":null,"abstract":"Neural point processes (NPPs) employ neural networks to capture complicated dynamics of asynchronous event sequences. Existing NPPs feed all history events into neural networks, assuming that all event types contribute to the prediction of the target type. However, this assumption can be problematic because in reality some event types do not contribute to the predictions of another type. To correct this defect, we learn to omit those types of events that do not contribute to the prediction of one target type during the formulation of NPPs. Towards this end, we simultaneously consider the tasks of (1) finding event types that contribute to predictions of the target types and (2) learning a NPP model from event sequences. For the former, we formulate a latent graph, with event types being vertices and non-zero contributing relationships being directed edges; then we propose a probabilistic graph generator, from which we sample a latent graph. For the latter, the sampled graph can be readily used as a plug-in to modify an existing NPP model. Because these two tasks are nested, we propose to optimize the model parameters through bilevel programming, and develop an efficient solution based on truncated gradient back-propagation. Experimental results on both synthetic and real-world datasets show the improved performance against state-of-the-art baselines. This work removes disturbance of non-contributing event types with the aid of a validation procedure, similar to the practice to mitigate overfitting used when training machine learning models.","PeriodicalId":106672,"journal":{"name":"Proceedings of the Web Conference 2021","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Web Conference 2021","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3442381.3450135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 16

Abstract

Neural point processes (NPPs) employ neural networks to capture the complicated dynamics of asynchronous event sequences. Existing NPPs feed all history events into the neural network, assuming that every event type contributes to the prediction of the target type. However, this assumption can be problematic because in reality some event types do not contribute to the predictions of another type. To correct this defect, we learn, while formulating the NPP, to omit event types that do not contribute to the prediction of a given target type. Towards this end, we simultaneously consider the tasks of (1) finding event types that contribute to predictions of the target types and (2) learning an NPP model from event sequences. For the former, we formulate a latent graph, with event types as vertices and non-zero contributing relationships as directed edges; we then propose a probabilistic graph generator, from which we sample a latent graph. For the latter, the sampled graph can be readily used as a plug-in to modify an existing NPP model. Because these two tasks are nested, we propose to optimize the model parameters through bilevel programming, and develop an efficient solution based on truncated gradient back-propagation. Experimental results on both synthetic and real-world datasets show improved performance over state-of-the-art baselines. This work removes the disturbance of non-contributing event types with the aid of a validation procedure, similar to the practice of using validation data to mitigate overfitting when training machine learning models.
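
The abstract describes sampling a directed latent graph over event types and using it to mask history events inside an NPP. As a rough illustration only, the sketch below shows one plausible way to implement such a graph generator and history mask in PyTorch; the class and function names, the Gumbel-sigmoid relaxation, and the straight-through estimator are assumptions made for illustration, not the authors' published implementation.

```python
import torch
import torch.nn as nn

class LatentGraphGenerator(nn.Module):
    """Holds edge logits between event types and samples a directed graph.
    graph[i, j] = 1 means events of type j contribute to predicting type i.
    (Illustrative sketch, not the authors' code.)"""

    def __init__(self, num_types: int):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(num_types, num_types))

    def forward(self, temperature: float = 0.5) -> torch.Tensor:
        # Gumbel-sigmoid (relaxed Bernoulli) sample keeps edges differentiable.
        u = torch.rand_like(self.edge_logits).clamp(1e-6, 1 - 1e-6)
        gumbel = torch.log(u) - torch.log1p(-u)
        soft = torch.sigmoid((self.edge_logits + gumbel) / temperature)
        # Straight-through: hard 0/1 edges forward, soft gradients backward.
        hard = (soft > 0.5).float()
        return hard + soft - soft.detach()


def mask_history(event_types: torch.Tensor, target_type: int,
                 graph: torch.Tensor) -> torch.Tensor:
    """Boolean mask over a history of events: keep only events whose type is an
    in-neighbour of `target_type` in the sampled graph."""
    keep = graph[target_type]            # (num_types,) 0/1 row for the target type
    return keep[event_types] > 0.5       # (seq_len,) boolean mask


# Tiny usage example with 4 event types and a history of 6 events.
gen = LatentGraphGenerator(num_types=4)
graph = gen()                            # (4, 4) sampled adjacency matrix
history_types = torch.tensor([0, 2, 1, 3, 2, 0])
mask = mask_history(history_types, target_type=1, graph=graph)
# An attention-based NPP could use `mask` to drop non-contributing events.
```

In this sketch, the relaxed sample lets gradients reach the edge logits, which, following the abstract, would be optimized in an outer validation-based loop via bilevel programming with truncated gradient back-propagation, while the NPP parameters are trained in the inner loop.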