GINTRIP: Interpretable Temporal Graph Regression using Information bottleneck and Prototype-based method

Ali Royat, Seyed Mohamad Moghadas, Lesley De Cruz, Adrian Munteanu
{"title":"GINTRIP: Interpretable Temporal Graph Regression using Information bottleneck and Prototype-based method","authors":"Ali Royat, Seyed Mohamad Moghadas, Lesley De Cruz, Adrian Munteanu","doi":"arxiv-2409.10996","DOIUrl":null,"url":null,"abstract":"Deep neural networks (DNNs) have demonstrated remarkable performance across\nvarious domains, yet their application to temporal graph regression tasks faces\nsignificant challenges regarding interpretability. This critical issue, rooted\nin the inherent complexity of both DNNs and underlying spatio-temporal patterns\nin the graph, calls for innovative solutions. While interpretability concerns\nin Graph Neural Networks (GNNs) mirror those of DNNs, to the best of our\nknowledge, no notable work has addressed the interpretability of temporal GNNs\nusing a combination of Information Bottleneck (IB) principles and\nprototype-based methods. Our research introduces a novel approach that uniquely\nintegrates these techniques to enhance the interpretability of temporal graph\nregression models. The key contributions of our work are threefold: We\nintroduce the \\underline{G}raph \\underline{IN}terpretability in\n\\underline{T}emporal \\underline{R}egression task using \\underline{I}nformation\nbottleneck and \\underline{P}rototype (GINTRIP) framework, the first combined\napplication of IB and prototype-based methods for interpretable temporal graph\ntasks. We derive a novel theoretical bound on mutual information (MI),\nextending the applicability of IB principles to graph regression tasks. We\nincorporate an unsupervised auxiliary classification head, fostering multi-task\nlearning and diverse concept representation, which enhances the model\nbottleneck's interpretability. 
Our model is evaluated on real-world traffic\ndatasets, outperforming existing methods in both forecasting accuracy and\ninterpretability-related metrics.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10996","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep neural networks (DNNs) have demonstrated remarkable performance across various domains, yet their application to temporal graph regression tasks faces significant challenges regarding interpretability. This critical issue, rooted in the inherent complexity of both DNNs and the underlying spatio-temporal patterns in the graph, calls for innovative solutions. While interpretability concerns in Graph Neural Networks (GNNs) mirror those of DNNs, to the best of our knowledge, no notable work has addressed the interpretability of temporal GNNs using a combination of Information Bottleneck (IB) principles and prototype-based methods. Our research introduces a novel approach that uniquely integrates these techniques to enhance the interpretability of temporal graph regression models. The key contributions of our work are threefold: We introduce the Graph INterpretability in Temporal Regression task using Information bottleneck and Prototype (GINTRIP) framework, the first combined application of IB and prototype-based methods for interpretable temporal graph tasks. We derive a novel theoretical bound on mutual information (MI), extending the applicability of IB principles to graph regression tasks. We incorporate an unsupervised auxiliary classification head, fostering multi-task learning and diverse concept representation, which enhances the model bottleneck's interpretability. Our model is evaluated on real-world traffic datasets, outperforming existing methods in both forecasting accuracy and interpretability-related metrics.
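The abstract does not spell out the architecture, but the three ingredients it names — a prototype layer on top of a bottleneck representation, a regression readout, an unsupervised auxiliary classification head, and an IB-style compression penalty (the classical IB objective trades off I(X;Z) against I(Z;Y)) — can be illustrated with a minimal numpy sketch. Everything below is a hypothetical toy (the sizes, the squared-norm compression surrogate, and the hard-assignment auxiliary labels are our assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bottleneck embeddings, as if produced by a temporal GNN encoder:
# N nodes, d-dimensional representations, K learnable prototypes.
N, d, K = 8, 4, 3
z = rng.normal(size=(N, d))           # bottleneck representations
prototypes = rng.normal(size=(K, d))  # prototype vectors

# Prototype similarity: softmax over negative squared distances,
# so each node gets a soft assignment to interpretable prototypes.
d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
sim = np.exp(-d2)
probs = sim / sim.sum(axis=1, keepdims=True)                       # (N, K)

# Regression readout: each prototype carries a scalar value and the
# prediction is the assignment-weighted mean (traffic speed, say).
proto_values = rng.normal(size=K)
y_hat = probs @ proto_values                                       # (N,)

# Unsupervised auxiliary "classification": hard prototype assignments
# serve as pseudo-labels for a multi-task classification head.
aux_labels = probs.argmax(axis=1)

# IB-style compression surrogate: penalise the representation norm
# (a crude stand-in for an I(X;Z) upper bound), traded off by beta.
y_true = rng.normal(size=N)
beta = 0.01
mse = ((y_hat - y_true) ** 2).mean()
loss = mse + beta * (z ** 2).mean()
```

In a real model `z`, `prototypes`, and `proto_values` would be learned jointly by gradient descent, and the MI bound derived in the paper would replace the naive norm penalty.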