Unraveling human social behavior motivations via inverse reinforcement learning-based link prediction

IF 3.3 · CAS Tier 3, Computer Science · Q2 (COMPUTER SCIENCE, THEORY & METHODS) · Computing · Pub Date: 2024-04-02 · DOI: 10.1007/s00607-024-01279-w
Xin Jiang, Hongbo Liu, Liping Yang, Bo Zhang, Tomas E. Ward, Václav Snášel
Citations: 0

Abstract


Link prediction aims to capture the evolution of network structure, especially in real social networks, where it supports friend recommendation, human contact trajectory simulation, and more. However, stochastic social behaviors and unstable spatio-temporal distributions in such networks often lead to inaccurate and unexplainable link predictions. Therefore, taking inspiration from the success of imitation learning in simulating human driver behavior, we propose a dynamic network link prediction method based on inverse reinforcement learning (DN-IRL) to unravel the motivations behind social behaviors in social networks. Specifically, the historical social behaviors (link sequences) and the next behavior (a single link) are regarded as the current environmental state and the action taken by the agent, respectively. Subsequently, the reward function, which is designed to maximize the cumulative expected reward of expert behaviors in the raw data, is optimized and used to learn the agent's social policy. Furthermore, our approach incorporates neighborhood-structure-based node embedding and self-attention modules, making it sensitive to network structure and making predicted links traceable. Experimental results on real-world dynamic social networks demonstrate that DN-IRL achieves more accurate and explainable predictions than the baselines.
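The abstract's framing (the history of links as the state, the next link as the action, and a reward function fit so that expert links from the data score highly) can be illustrated with a toy maximum-entropy-style IRL sketch. Everything below is an illustrative assumption, not the paper's DN-IRL implementation: the hand-made features, the linear reward, and the softmax policy are minimal stand-ins for the node-embedding and self-attention modules.

```python
# Toy sketch of the state/action framing in the abstract:
# state = recent link history, action = a candidate next link.
# A linear reward r(s, a) = theta . f(s, a) is fit so the "expert"
# link observed in the data gets high probability under a softmax
# policy (a maximum-entropy-style IRL gradient update).
import numpy as np

def features(state, action):
    """Illustrative 2-d features: does the candidate link touch a node
    already active in the history, and (constant here) history length."""
    touched = {n for link in state for n in link}
    src, dst = action
    overlap = float(src in touched or dst in touched)
    return np.array([overlap, len(state) / 10.0])

def policy(theta, state, candidates):
    """Softmax policy over candidate next links under the learned reward."""
    scores = np.array([theta @ features(state, a) for a in candidates])
    scores -= scores.max()  # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def irl_step(theta, state, expert_action, candidates, lr=0.1):
    """One gradient step: push reward weights toward the expert action's
    features and away from the policy's expected features."""
    p = policy(theta, state, candidates)
    expected = sum(pi * features(state, a) for pi, a in zip(p, candidates))
    grad = features(state, expert_action) - expected
    return theta + lr * grad

# Tiny worked example: two historical links, the expert then forms (1, 3).
state = [(0, 1), (1, 2)]
candidates = [(1, 3), (4, 5), (0, 2)]
theta = np.zeros(2)
for _ in range(50):
    theta = irl_step(theta, state, (1, 3), candidates)
probs = policy(theta, state, candidates)
print(probs)  # links overlapping the history now outweigh (4, 5)
```

After training, the reward's "overlap" weight grows, so candidates that touch previously active nodes dominate the policy; this is the sense in which the learned reward makes the predicted link traceable back to a motivation.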

Source journal: Computing (Engineering & Technology – Computer Science: Theory & Methods)
CiteScore: 8.20 · Self-citation rate: 2.70% · Articles per year: 107 · Review time: 3 months
About the journal: Computing publishes original papers, short communications and surveys on all fields of computing. The contributions should be written in English and may be of theoretical or applied nature; the essential criteria are computational relevance and systematic foundation of results.
Latest articles in this journal:
- Mapping and just-in-time traffic congestion mitigation for emergency vehicles in smart cities
- Fog intelligence for energy efficient management in smart street lamps
- Contextual authentication of users and devices using machine learning
- Multi-objective service composition optimization problem in IoT for agriculture 4.0
- Robust evaluation of GPU compute instances for HPC and AI in the cloud: a TOPSIS approach with sensitivity, bootstrapping, and non-parametric analysis