Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering

Transactions of the Association for Computational Linguistics | Impact Factor: 4.2 | CAS Tier 1 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Publication date: 2024-04-01 | DOI: 10.1162/tacl_a_00646
Dingmin Wang, Qiuyuan Huang, M. Jackson, Jianfeng Gao
{"title":"Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering","authors":"Dingmin Wang, Qiuyuan Huang, M. Jackson, Jianfeng Gao","doi":"10.1162/tacl_a_00646","DOIUrl":null,"url":null,"abstract":"Abstract An open-domain question answering (QA) system usually follows a retrieve-then-read paradigm, in which a retriever is used to retrieve relevant passages from a large corpus, and then a reader generates answers based on the retrieved passages and the original question. In this paper, we propose a simple and novel mutual learning framework to improve the performance of retrieve-then-read-style models via an intermediate module named the knowledge selector, which we train with reinforcement learning. The key benefits of our proposed intermediate module are: 1) no requirement for additional annotated question-passage pairs; 2) improvements in both retrieval and QA performance, as well as computational efficiency, compared to prior competitive retrieve-then-read models; 3) with no finetuning, improvement in the zero-shot performance of large-scale pre-trained language models, e.g., ChatGPT, by encapsulating the input with relevant knowledge without violating the input length constraint.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":4.2000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions of the Association for Computational Linguistics","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1162/tacl_a_00646","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 1

Abstract

An open-domain question answering (QA) system usually follows a retrieve-then-read paradigm, in which a retriever is used to retrieve relevant passages from a large corpus, and then a reader generates answers based on the retrieved passages and the original question. In this paper, we propose a simple and novel mutual learning framework to improve the performance of retrieve-then-read-style models via an intermediate module named the knowledge selector, which we train with reinforcement learning. The key benefits of our proposed intermediate module are: 1) no requirement for additional annotated question-passage pairs; 2) improvements in both retrieval and QA performance, as well as computational efficiency, compared to prior competitive retrieve-then-read models; 3) with no finetuning, improvement in the zero-shot performance of large-scale pre-trained language models, e.g., ChatGPT, by encapsulating the input with relevant knowledge without violating the input length constraint.
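The abstract describes a pipeline in which a knowledge selector sits between the retriever and the reader, choosing which retrieved passages to pass forward so that the reader's input, or a zero-shot LLM's prompt, stays within its length budget. Below is a minimal illustrative sketch of where such a selector fits; the greedy scoring heuristic, the whitespace token count, and the `max_tokens` budget are hypothetical stand-ins, not the paper's reinforcement-learning-trained component or its hyperparameters.

```python
# Minimal sketch of a retrieve-then-read pipeline with an intermediate
# knowledge selector. The selector here is a greedy placeholder; in the
# paper this component is learned with reinforcement learning.

from dataclasses import dataclass
from typing import List


@dataclass
class Passage:
    text: str
    retriever_score: float  # relevance score assigned by the retriever


def select_knowledge(question: str, passages: List[Passage],
                     max_tokens: int) -> List[Passage]:
    """Greedy stand-in for the learned knowledge selector: keep the
    highest-scoring passages that fit within the reader's input budget.
    (Only illustrates where the selector sits, not how it is trained.)"""
    selected, used = [], 0
    for p in sorted(passages, key=lambda p: p.retriever_score, reverse=True):
        cost = len(p.text.split())  # crude whitespace token count
        if used + cost <= max_tokens:
            selected.append(p)
            used += cost
    return selected


def build_reader_prompt(question: str, selected: List[Passage]) -> str:
    """Encapsulate the question with the selected knowledge so the prompt
    respects the reader's (or a zero-shot LLM's) input length constraint."""
    context = "\n".join(f"- {p.text}" for p in selected)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


if __name__ == "__main__":
    question = "What paradigm do open-domain QA systems usually follow?"
    passages = [
        Passage("Open-domain QA systems follow a retrieve-then-read paradigm.", 0.91),
        Passage("A knowledge selector filters passages before the reader.", 0.72),
        Passage("An unrelated passage about discourse treebanks.", 0.10),
    ]
    selected = select_knowledge(question, passages, max_tokens=40)
    print(build_reader_prompt(question, selected))
```

The greedy budget-packing above only shows the selector's interface; in the paper the selector and reader are trained jointly under the mutual learning framework.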
Source journal
CiteScore: 32.60
Self-citation rate: 4.60%
Articles published: 58
Review time: 8 weeks
About the journal
The highly regarded quarterly journal Computational Linguistics has a companion journal called Transactions of the Association for Computational Linguistics. This open access journal publishes articles in all areas of natural language processing and is an important resource for academic and industry computational linguists, natural language processing experts, artificial intelligence and machine learning investigators, cognitive scientists, speech specialists, as well as linguists and philosophers.
Latest articles in this journal
The Thai Discourse Treebank: Annotating and Classifying Thai Discourse Connectives
Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
Simultaneous Selection and Adaptation of Source Data via Four-Level Optimization
Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering
Unifying Structured Data as Graph for Data-to-Text Pre-Training