ExSPIN: Explicit Feedback-Based Self-Play Fine-Tuning for Text-to-SQL Parsing.

Entropy · Impact Factor 2.0 · JCR Q2 (Physics, Multidisciplinary) · CAS Tier 3 (Physics and Astronomy) · Published: 2025-02-25 · DOI: 10.3390/e27030235
Liang Yan, Jinhang Su, Chuanyi Liu, Shaoming Duan, Yuhao Zhang, Jianhang Li, Peiyi Han, Ye Liu
{"title":"ExSPIN: Explicit Feedback-Based Self-Play Fine-Tuning for Text-to-SQL Parsing.","authors":"Liang Yan, Jinhang Su, Chuanyi Liu, Shaoming Duan, Yuhao Zhang, Jianhang Li, Peiyi Han, Ye Liu","doi":"10.3390/e27030235","DOIUrl":null,"url":null,"abstract":"<p><p>Recently, self-play fine-tuning (SPIN) has garnered widespread attention as it enables large language models (LLMs) to iteratively enhance their capabilities through simulated interactions with themselves, transforming a weak LLM into a strong one. However, applying SPIN to fine-tune text-to-SQL models presents substantial challenges. Notably, existing frameworks lack clear signal feedback during the training process and fail to adequately capture the implicit schema-linking characteristics between natural language questions and databases. To address these issues, we propose a novel self-play fine-tuning method for text-to-SQL models, termed ExSPIN, which incorporates explicit feedback. Specifically, during fine-tuning, the SQL query execution results predicted by the LLM are fed back into the model's parameter update process. This feedback allows both the main player and the opponent to more accurately distinguish between negative and positive samples, thereby improving the fine-tuning outcomes. Additionally, we employ in-context learning techniques to provide explicit schema hints, enabling the LLM to better understand the schema-linking between the database and natural language queries during the self-play process. Evaluations on two real-world datasets show that our method significantly outperforms the state-of-the-art approaches.</p>","PeriodicalId":11694,"journal":{"name":"Entropy","volume":"27 3","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11940967/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Entropy","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.3390/e27030235","RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PHYSICS, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0

Abstract

Recently, self-play fine-tuning (SPIN) has garnered widespread attention as it enables large language models (LLMs) to iteratively enhance their capabilities through simulated interactions with themselves, transforming a weak LLM into a strong one. However, applying SPIN to fine-tune text-to-SQL models presents substantial challenges. Notably, existing frameworks lack clear signal feedback during the training process and fail to adequately capture the implicit schema-linking characteristics between natural language questions and databases. To address these issues, we propose a novel self-play fine-tuning method for text-to-SQL models, termed ExSPIN, which incorporates explicit feedback. Specifically, during fine-tuning, the SQL query execution results predicted by the LLM are fed back into the model's parameter update process. This feedback allows both the main player and the opponent to more accurately distinguish between negative and positive samples, thereby improving the fine-tuning outcomes. Additionally, we employ in-context learning techniques to provide explicit schema hints, enabling the LLM to better understand the schema-linking between the database and natural language queries during the self-play process. Evaluations on two real-world datasets show that our method significantly outperforms the state-of-the-art approaches.
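The key idea is the explicit feedback signal: rather than treating every opponent-generated query as a negative sample (as vanilla SPIN would), the predicted SQL is executed and its result set is compared against the gold query's results. Below is a minimal sketch of this labeling step and one self-play round, assuming SQLite databases and a hypothetical model interface; `generate_sql` and `fine_tune_preferences` are illustrative names, not the paper's actual API, and the DPO-style preference update stands in for whatever loss the authors use.

```python
# A minimal sketch of execution-feedback labeling for self-play
# fine-tuning, under the assumptions stated above.
import sqlite3


def execute_sql(db_path: str, sql: str):
    """Run a query and return its rows, or None if execution fails."""
    try:
        conn = sqlite3.connect(db_path)
        try:
            return conn.execute(sql).fetchall()
        finally:
            conn.close()
    except sqlite3.Error:
        return None  # a malformed query is itself a negative signal


def execution_match(db_path: str, predicted_sql: str, gold_sql: str) -> bool:
    """Explicit feedback: compare result sets, not SQL strings."""
    pred = execute_sql(db_path, predicted_sql)
    gold = execute_sql(db_path, gold_sql)
    if pred is None or gold is None:
        return False
    # Order-insensitive comparison of the returned rows.
    return sorted(map(repr, pred)) == sorted(map(repr, gold))


def self_play_round(main_model, opponent_model, dataset):
    """One self-play iteration with explicit feedback.

    The opponent (the previous checkpoint) generates candidate SQL;
    execution results decide which candidates count as negatives, and
    the main player is updated with a DPO-style preference loss.
    """
    preference_pairs = []
    for ex in dataset:  # ex: {"question", "schema", "db_path", "gold_sql"}
        candidate = opponent_model.generate_sql(ex["question"], ex["schema"])
        if execution_match(ex["db_path"], candidate, ex["gold_sql"]):
            # Candidate is semantically correct: do not penalize it as a
            # negative, even though its text differs from the gold SQL.
            continue
        preference_pairs.append({
            "prompt": ex["question"],
            "chosen": ex["gold_sql"],  # ground truth stays positive
            "rejected": candidate,     # execution-verified negative
        })
    main_model.fine_tune_preferences(preference_pairs)
    return main_model
```

Comparing execution results rather than SQL strings matters because semantically equivalent queries can differ textually; a candidate that returns the gold result set should not be pushed down as a negative.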

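The second ingredient is the in-context schema hint. A rough sketch of how such a prompt might be assembled is below; the token-overlap heuristic for schema linking and the exact prompt layout are assumptions for illustration, since the abstract does not specify them.

```python
# A hedged sketch of an explicit schema-hint prompt. The linking
# heuristic (lexical overlap between question tokens and column names)
# is an assumption, not the paper's exact construction.

def schema_hint_prompt(question: str, schema: dict) -> str:
    """Build a prompt from a schema mapping table name -> column names."""
    q_tokens = {t.strip("?.,").lower() for t in question.split()}
    lines = [f"-- Question: {question}", "-- Database schema:"]
    for table, columns in schema.items():
        lines.append(f"--   {table}({', '.join(columns)})")
    # Explicit schema-linking hint: surface columns the question mentions.
    linked = [f"{t}.{c}" for t, cols in schema.items()
              for c in cols if c.lower() in q_tokens]
    if linked:
        lines.append(f"-- Likely relevant columns: {', '.join(linked)}")
    lines.append("SELECT")
    return "\n".join(lines)


prompt = schema_hint_prompt(
    "What is the average age of all singers?",
    {"singer": ["singer_id", "name", "age"], "concert": ["concert_id", "year"]},
)
print(prompt)
```

On this example question, the hint line surfaces singer.age, giving the model an explicit pointer to the schema elements the question refers to instead of leaving the linking entirely implicit.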

Source journal: Entropy (Physics, Multidisciplinary)
CiteScore: 4.90
Self-citation rate: 11.10%
Annual publication volume: 1580
Average review time: 21.05 days
About the journal: Entropy (ISSN 1099-4300) is an international and interdisciplinary journal of entropy and information studies that publishes reviews, regular research papers, and short notes. Its aim is to encourage scientists to publish their theoretical and experimental work in as much detail as possible; there is no restriction on paper length. Where computations or experiments are involved, full details must be provided so that the results can be reproduced.
Latest articles in this journal:
The Capacity Gains of Gaussian Channels with Unstable Versus Stable Autoregressive Noise.
GAME-YOLO: Global Attention and Multi-Scale Enhancement for Low-Visibility UAV Detection with Sub-Pixel Localization.
Transfer Irreversibilities in the Lenoir Cycle: FTT Design Criteria with ε-NTU.
Comprehension as Purification in Reading.
A Multi-Feature Fusion-Based Two-Stage Method for Airport Crater Extraction from Remote Sensing Images.