{"title":"A lightweight approach based on prompt for few-shot relation extraction","authors":"Ying Zhang, Wencheng Huang, Depeng Dang","doi":"10.1016/j.csl.2023.101580","DOIUrl":null,"url":null,"abstract":"<div><p>Few-shot relation extraction (FSRE) aims to predict the relation between two entities in a sentence using a few annotated samples. Many works solve the FSRE problem by training complex models with a huge number of parameters, which results in longer processing times. Some recent works focus on introducing relation information into Prototype Networks in various ways. However, most of these methods obtain entity and relation representations by fine-tuning large pre-trained language models. This implies that a copy of the complete pre-trained model needs to be saved after fine-tuning for each specific task, leading to a shortage of computing and storage resources. To address this problem, in this paper, we introduce a lightweight approach that utilizes prompt-learning to assist in fine-tuning the model while adjusting fewer parameters. To obtain a better relation prototype, we design a new enhanced fusion module to fuse the relation information with the original prototype. 
We conduct extensive experiments on the common FSRE datasets FewRel 1.0 and FewRel 2.0 to verify the advantages of our method; the results show that our model achieves state-of-the-art performance.</p></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230823000992","RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Few-shot relation extraction (FSRE) aims to predict the relation between two entities in a sentence using a few annotated samples. Many works solve the FSRE problem by training complex models with a huge number of parameters, which results in longer processing times. Some recent works focus on introducing relation information into Prototype Networks in various ways. However, most of these methods obtain entity and relation representations by fine-tuning large pre-trained language models. This implies that a copy of the complete pre-trained model needs to be saved after fine-tuning for each specific task, leading to a shortage of computing and storage resources. To address this problem, in this paper, we introduce a lightweight approach that utilizes prompt-learning to assist in fine-tuning the model while adjusting fewer parameters. To obtain a better relation prototype, we design a new enhanced fusion module to fuse the relation information with the original prototype. We conduct extensive experiments on the common FSRE datasets FewRel 1.0 and FewRel 2.0 to verify the advantages of our method; the results show that our model achieves state-of-the-art performance.
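To make the prototype-network setup in the abstract concrete, the following is a minimal sketch of few-shot classification by nearest prototype, with a simple gated fusion of relation embeddings into the class prototypes. The gated sum and the function names (`compute_prototypes`, `fuse`, `classify`) are illustrative assumptions, not the paper's actual enhanced fusion module, which is not specified in the abstract.

```python
import numpy as np

def compute_prototypes(support_embeddings):
    # support_embeddings: (n_way, k_shot, dim).
    # Standard Prototype Networks: mean-pool each class's support
    # instances into one prototype vector per class.
    return support_embeddings.mean(axis=1)

def fuse(prototypes, relation_embeddings, gate=0.5):
    # Hypothetical fusion step: blend each class prototype with an
    # embedding of the relation's textual description. A gated sum is
    # one simple choice; the paper's module is more elaborate.
    return gate * prototypes + (1.0 - gate) * relation_embeddings

def classify(query_embedding, prototypes):
    # Assign the query to the nearest (Euclidean) fused prototype.
    dists = np.linalg.norm(prototypes - query_embedding, axis=1)
    return int(np.argmin(dists))
```

In a 2-way 2-shot episode, `support_embeddings` has shape `(2, 2, dim)`; the query is labeled with whichever fused prototype it lies closest to.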
Journal introduction:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.