Towards Graph Prompt Learning: A Survey and Beyond

Qingqing Long, Yuchen Yan, Peiyan Zhang, Chen Fang, Wentao Cui, Zhiyuan Ning, Meng Xiao, Ning Cao, Xiao Luo, Lingjun Xu, Shiyue Jiang, Zheng Fang, Chong Chen, Xian-Sheng Hua, Yuanchun Zhou
{"title":"实现图形提示学习:调查及其他","authors":"Qingqing Long, Yuchen Yan, Peiyan Zhang, Chen Fang, Wentao Cui, Zhiyuan Ning, Meng Xiao, Ning Cao, Xiao Luo, Lingjun Xu, Shiyue Jiang, Zheng Fang, Chong Chen, Xian-Sheng Hua, Yuanchun Zhou","doi":"arxiv-2408.14520","DOIUrl":null,"url":null,"abstract":"Large-scale \"pre-train and prompt learning\" paradigms have demonstrated\nremarkable adaptability, enabling broad applications across diverse domains\nsuch as question answering, image recognition, and multimodal retrieval. This\napproach fully leverages the potential of large-scale pre-trained models,\nreducing downstream data requirements and computational costs while enhancing\nmodel applicability across various tasks. Graphs, as versatile data structures\nthat capture relationships between entities, play pivotal roles in fields such\nas social network analysis, recommender systems, and biological graphs. Despite\nthe success of pre-train and prompt learning paradigms in Natural Language\nProcessing (NLP) and Computer Vision (CV), their application in graph domains\nremains nascent. In graph-structured data, not only do the node and edge\nfeatures often have disparate distributions, but the topological structures\nalso differ significantly. This diversity in graph data can lead to\nincompatible patterns or gaps between pre-training and fine-tuning on\ndownstream graphs. We aim to bridge this gap by summarizing methods for\nalleviating these disparities. This includes exploring prompt design\nmethodologies, comparing related techniques, assessing application scenarios\nand datasets, and identifying unresolved problems and challenges. This survey\ncategorizes over 100 relevant works in this field, summarizing general design\nprinciples and the latest applications, including text-attributed graphs,\nmolecules, proteins, and recommendation systems. Through this extensive review,\nwe provide a foundational understanding of graph prompt learning, aiming to\nimpact not only the graph mining community but also the broader Artificial\nGeneral Intelligence (AGI) community.","PeriodicalId":501032,"journal":{"name":"arXiv - CS - Social and Information Networks","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Towards Graph Prompt Learning: A Survey and Beyond\",\"authors\":\"Qingqing Long, Yuchen Yan, Peiyan Zhang, Chen Fang, Wentao Cui, Zhiyuan Ning, Meng Xiao, Ning Cao, Xiao Luo, Lingjun Xu, Shiyue Jiang, Zheng Fang, Chong Chen, Xian-Sheng Hua, Yuanchun Zhou\",\"doi\":\"arxiv-2408.14520\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large-scale \\\"pre-train and prompt learning\\\" paradigms have demonstrated\\nremarkable adaptability, enabling broad applications across diverse domains\\nsuch as question answering, image recognition, and multimodal retrieval. This\\napproach fully leverages the potential of large-scale pre-trained models,\\nreducing downstream data requirements and computational costs while enhancing\\nmodel applicability across various tasks. Graphs, as versatile data structures\\nthat capture relationships between entities, play pivotal roles in fields such\\nas social network analysis, recommender systems, and biological graphs. Despite\\nthe success of pre-train and prompt learning paradigms in Natural Language\\nProcessing (NLP) and Computer Vision (CV), their application in graph domains\\nremains nascent. 
In graph-structured data, not only do the node and edge\\nfeatures often have disparate distributions, but the topological structures\\nalso differ significantly. This diversity in graph data can lead to\\nincompatible patterns or gaps between pre-training and fine-tuning on\\ndownstream graphs. We aim to bridge this gap by summarizing methods for\\nalleviating these disparities. This includes exploring prompt design\\nmethodologies, comparing related techniques, assessing application scenarios\\nand datasets, and identifying unresolved problems and challenges. This survey\\ncategorizes over 100 relevant works in this field, summarizing general design\\nprinciples and the latest applications, including text-attributed graphs,\\nmolecules, proteins, and recommendation systems. Through this extensive review,\\nwe provide a foundational understanding of graph prompt learning, aiming to\\nimpact not only the graph mining community but also the broader Artificial\\nGeneral Intelligence (AGI) community.\",\"PeriodicalId\":501032,\"journal\":{\"name\":\"arXiv - CS - Social and Information Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Social and Information Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.14520\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Social and Information Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.14520","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Large-scale "pre-train and prompt learning" paradigms have demonstrated remarkable adaptability, enabling broad applications across diverse domains such as question answering, image recognition, and multimodal retrieval. This approach fully leverages the potential of large-scale pre-trained models, reducing downstream data requirements and computational costs while enhancing model applicability across various tasks. Graphs, as versatile data structures that capture relationships between entities, play pivotal roles in fields such as social network analysis, recommender systems, and biological graphs. Despite the success of pre-train and prompt learning paradigms in Natural Language Processing (NLP) and Computer Vision (CV), their application in graph domains remains nascent. In graph-structured data, not only do the node and edge features often have disparate distributions, but the topological structures also differ significantly. This diversity in graph data can lead to incompatible patterns or gaps between pre-training and fine-tuning on downstream graphs. We aim to bridge this gap by summarizing methods for alleviating these disparities. This includes exploring prompt design methodologies, comparing related techniques, assessing application scenarios and datasets, and identifying unresolved problems and challenges. This survey categorizes over 100 relevant works in this field, summarizing general design principles and the latest applications, including text-attributed graphs, molecules, proteins, and recommendation systems. Through this extensive review, we provide a foundational understanding of graph prompt learning, aiming to impact not only the graph mining community but also the broader Artificial General Intelligence (AGI) community.
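To make the "pre-train and prompt" idea concrete for graphs, the sketch below illustrates one common prompting pattern the abstract alludes to: a small learnable prompt is added to node features while the pre-trained GNN encoder stays frozen, so only the prompt and a lightweight task head are tuned on the downstream graph. This is a minimal, self-contained illustration, not a method from any specific surveyed work; the class names (SimpleGNNEncoder, FeaturePrompt), dimensions, and the random placeholder data are all hypothetical.

```python
# Minimal sketch of feature-level graph prompting: a learnable prompt vector is
# broadcast onto every node's input features, the pre-trained encoder is frozen,
# and only the prompt plus a small task head are trained on the downstream task.
import torch
import torch.nn as nn


class SimpleGNNEncoder(nn.Module):
    """Stand-in for a pre-trained GNN: two rounds of neighborhood aggregation + linear maps."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: row-normalized dense adjacency matrix (with self-loops).
        h = torch.relu(self.lin1(adj @ x))
        return self.lin2(adj @ h)


class FeaturePrompt(nn.Module):
    """Learnable prompt added to node features before the frozen encoder."""

    def __init__(self, in_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(1, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.prompt  # the same prompt is broadcast to every node


# Hypothetical downstream setup: random tensors stand in for a real graph.
num_nodes, in_dim, hid_dim, num_classes = 100, 32, 64, 4
x = torch.randn(num_nodes, in_dim)
adj = torch.eye(num_nodes)  # placeholder adjacency; a real normalized graph would go here
labels = torch.randint(0, num_classes, (num_nodes,))

encoder = SimpleGNNEncoder(in_dim, hid_dim)  # assume weights come from pre-training
for p in encoder.parameters():
    p.requires_grad_(False)                  # freeze the pre-trained encoder

prompt = FeaturePrompt(in_dim)
head = nn.Linear(hid_dim, num_classes)       # lightweight downstream task head
optimizer = torch.optim.Adam(
    list(prompt.parameters()) + list(head.parameters()), lr=1e-2
)

for _ in range(50):  # prompt-tuning loop: only prompt and head receive gradients
    optimizer.zero_grad()
    logits = head(encoder(prompt(x), adj))
    loss = nn.functional.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
```

Because the encoder is frozen, the number of trainable parameters is tiny compared with full fine-tuning, which is the source of the reduced downstream data and compute requirements discussed in the abstract; the surveyed works differ mainly in where the prompt is injected (features, structure, or task reformulation) and how it is learned.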