Retrieval In Decoder benefits generative models for explainable complex question answering
Jianzhou Feng, Qin Wang, Huaxiao Qiu, Lirong Liu
Neural Networks, Volume 181, Article 106833. Published 2024-10-25. DOI: 10.1016/j.neunet.2024.106833
Abstract
Large-scale Language Models (LLMs) using Chain-of-Thought prompting demonstrate exceptional performance on a variety of tasks. However, persistent factual hallucinations remain a significant challenge in practical applications. Prevailing retrieval-augmented methods treat the retriever and generator as separate components, which, through intensive supervised training, inadvertently caps the generator's capabilities at those of the retriever. In this work, we propose RID, an unsupervised Retrieval In Decoder framework for multi-granularity decoding that integrates retrieval directly into the decoding process of generative models. RID dynamically adjusts decoding granularity based on retrieval outcomes and corrects the decoding direction through retrieval's direct influence on the next token. Moreover, we introduce a reinforcement learning-driven knowledge distillation method for adaptive explanation generation, so that the approach transfers to Small-scale Language Models (SLMs). On six public benchmarks, RID surpasses popular LLMs and existing retrieval-augmented methods, demonstrating its effectiveness in models of different scales and verifying its applicability and scalability.
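To make the retrieval-in-decoding idea concrete, here is a minimal, self-contained sketch of one way retrieval can steer generation at decoding time: at each step the partial output is used as a retrieval query, and vocabulary tokens supported by the retrieved evidence receive a logit bonus before the next token is chosen. This is an illustrative toy, not the paper's RID algorithm; the mock language model, the mock retriever, and the fixed logit bias are all assumptions made for the example.

```python
import random

# Toy vocabulary and mock components -- placeholders for the example only.
VOCAB = ["Paris", "London", "is", "the", "capital", "of", "France", "."]

def mock_lm_logits(prefix):
    """Stand-in for a real LM: deterministic pseudo-random logits per prefix."""
    rng = random.Random(" ".join(prefix))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def mock_retrieve(query):
    """Stand-in retriever: returns the token set of a matching passage."""
    corpus = {"capital of France": ["Paris", "is", "the", "capital", "of", "France"]}
    for key, passage in corpus.items():
        if any(word in query for word in key.split()):
            return set(passage)
    return set()

def decode_with_retrieval(question, max_steps=8, bias=2.0):
    """Greedy decoding in which retrieval directly biases next-token logits.

    At every step the partial generation is used as a retrieval query, and
    tokens supported by the retrieved evidence receive a logit bonus, so
    retrieval can correct the decoding direction token by token.
    """
    prefix = question.split()
    generated = []
    for _ in range(max_steps):
        logits = mock_lm_logits(prefix + generated)
        evidence = mock_retrieve(" ".join(prefix + generated))
        scores = [
            logit + (bias if tok in evidence else 0.0)
            for tok, logit in zip(VOCAB, logits)
        ]
        next_tok = VOCAB[scores.index(max(scores))]
        generated.append(next_tok)
        if next_tok == ".":  # stop at end-of-sentence token
            break
    return " ".join(generated)

if __name__ == "__main__":
    print(decode_with_retrieval("What is the capital of France"))
```

The abstract additionally describes adjusting decoding granularity dynamically based on retrieval outcomes; the sketch fixes a single token-level granularity and a constant bias purely for brevity.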
About the Journal
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically inspired artificial intelligence.