Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning

Yukang Lin, Bingchen Zhong, Shuoran Jiang, Joanna Siebert, Qingcai Chen
{"title":"Reasoning Graph Enhanced Exemplars Retrieval for In-Context Learning","authors":"Yukang Lin, Bingchen Zhong, Shuoran Jiang, Joanna Siebert, Qingcai Chen","doi":"arxiv-2409.11147","DOIUrl":null,"url":null,"abstract":"Large language models(LLMs) have exhibited remarkable few-shot learning\ncapabilities and unified the paradigm of NLP tasks through the in-context\nlearning(ICL) technique. Despite the success of ICL, the quality of the\nexemplar demonstrations can significantly influence the LLM's performance.\nExisting exemplar selection methods mainly focus on the semantic similarity\nbetween queries and candidate exemplars. On the other hand, the logical\nconnections between reasoning steps can be beneficial to depict the\nproblem-solving process as well. In this paper, we proposes a novel method\nnamed Reasoning Graph-enhanced Exemplar Retrieval(RGER). RGER first quires LLM\nto generate an initial response, then expresses intermediate problem-solving\nsteps to a graph structure. After that, it employs graph kernel to select\nexemplars with semantic and structural similarity. Extensive experiments\ndemonstrate the structural relationship is helpful to the alignment of queries\nand candidate exemplars. The efficacy of RGER on math and logit reasoning tasks\nshowcases its superiority over state-of-the-art retrieval-based approaches. Our\ncode is released at https://github.com/Yukang-Lin/RGER.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11147","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Large language models (LLMs) have exhibited remarkable few-shot learning capabilities and have unified the paradigm of NLP tasks through the in-context learning (ICL) technique. Despite the success of ICL, the quality of the exemplar demonstrations can significantly influence the LLM's performance. Existing exemplar selection methods focus mainly on the semantic similarity between queries and candidate exemplars. However, the logical connections between reasoning steps can also help depict the problem-solving process. In this paper, we propose a novel method named Reasoning Graph-enhanced Exemplar Retrieval (RGER). RGER first queries the LLM to generate an initial response, then expresses the intermediate problem-solving steps as a graph structure. After that, it employs a graph kernel to select exemplars with both semantic and structural similarity to the query. Extensive experiments demonstrate that this structural relationship helps align queries with candidate exemplars. The efficacy of RGER on math and logical reasoning tasks showcases its superiority over state-of-the-art retrieval-based approaches. Our code is released at https://github.com/Yukang-Lin/RGER.
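The abstract only sketches the selection step, so below is a minimal illustration of what graph-kernel-based exemplar retrieval can look like. The abstract does not say which graph kernel RGER uses; this sketch assumes a Weisfeiler-Lehman subtree kernel over reasoning graphs represented as networkx graphs with node labels, combined with cosine similarity over precomputed query embeddings. All function names (wl_features, wl_kernel, select_exemplars) and the mixing weight alpha are illustrative assumptions, not taken from the RGER codebase.

```python
# A minimal sketch of exemplar selection mixing semantic and structural
# similarity. Assumptions (not from the paper): reasoning graphs are
# networkx graphs with a "label" node attribute, a Weisfeiler-Lehman
# subtree kernel is used, and `alpha` weights the two signals.

from collections import Counter

import networkx as nx
import numpy as np


def wl_features(graph: nx.Graph, iterations: int = 3) -> Counter:
    """Weisfeiler-Lehman relabeling: count node labels over refinement rounds."""
    labels = {n: str(graph.nodes[n].get("label", "")) for n in graph.nodes}
    features = Counter(labels.values())
    for _ in range(iterations):
        new_labels = {}
        for n in graph.nodes:
            # Compress each node's label together with its sorted neighborhood.
            neighborhood = sorted(labels[m] for m in graph.neighbors(n))
            new_labels[n] = labels[n] + "|" + ".".join(neighborhood)
        labels = new_labels
        features.update(labels.values())
    return features


def wl_kernel(g1: nx.Graph, g2: nx.Graph) -> float:
    """Normalized WL subtree kernel between two reasoning graphs."""
    f1, f2 = wl_features(g1), wl_features(g2)
    dot = sum(f1[k] * f2[k] for k in f1.keys() & f2.keys())
    norm = np.sqrt(sum(v * v for v in f1.values()) * sum(v * v for v in f2.values()))
    return dot / norm if norm > 0 else 0.0


def select_exemplars(query_graph, query_emb, candidates, k=4, alpha=0.5):
    """Rank candidates by a weighted mix of semantic (cosine over embeddings)
    and structural (WL kernel) similarity; return the top-k exemplar texts.

    `candidates` is a list of (reasoning_graph, embedding, exemplar_text)."""
    scores = []
    for cand_graph, cand_emb, cand_text in candidates:
        semantic = float(
            np.dot(query_emb, cand_emb)
            / (np.linalg.norm(query_emb) * np.linalg.norm(cand_emb))
        )
        structural = wl_kernel(query_graph, cand_graph)
        scores.append((alpha * semantic + (1 - alpha) * structural, cand_text))
    scores.sort(key=lambda s: s[0], reverse=True)
    return [text for _, text in scores[:k]]
```

In the pipeline the abstract describes, the reasoning graphs and embeddings for `candidates` would be built offline from candidate exemplars, while `query_graph` would be derived from the LLM's initial response to the test question; the alpha trade-off between semantic and structural signals is a hyperparameter this sketch invents for illustration.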