SEBGM: Sentence Embedding Based on Generation Model with multi-task learning

IF 3.1 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Computer Speech and Language | Publication Date: 2024-04-06 | DOI: 10.1016/j.csl.2024.101647
Qian Wang, Weiqi Zhang, Tianyi Lei, Yu Cao, Dezhong Peng, Xu Wang
{"title":"SEBGM:基于多任务学习的句子嵌入生成模型","authors":"Qian Wang ,&nbsp;Weiqi Zhang ,&nbsp;Tianyi Lei ,&nbsp;Yu Cao ,&nbsp;Dezhong Peng ,&nbsp;Xu Wang","doi":"10.1016/j.csl.2024.101647","DOIUrl":null,"url":null,"abstract":"<div><p>Sentence embedding, which aims to learn an effective representation of a sentence, is a significant part for downstream tasks. Recently, using contrastive learning and pre-trained model, most methods of sentence embedding achieve encouraging results. However, on the one hand, these methods utilize discrete data augmentation to obtain positive samples performing contrastive learning, which could distort the original semantic of sentences. On the other hand, most methods directly employ the contrastive frameworks of computer vision to perform contrastive learning, which could confine the contrastive training due to the discrete and sparse text data compared with image data. To solve the issues above, we design a novel contrastive framework based on generation model with multi-task learning by supervised contrastive training on the dataset of natural language inference (NLI) to obtain meaningful sentence embedding (SEBGM). SEBGM makes use of multi-task learning to enhance the usage of word-level and sentence-level semantic information of samples. In this way, the positive samples of SEBGM are from NLI rather than data augmentation. Extensive experiments show that our proposed SEBGM can advance the state-of-the-art sentence embedding on the semantic textual similarity (STS) tasks by utilizing multi-task learning.</p></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":null,"pages":null},"PeriodicalIF":3.1000,"publicationDate":"2024-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SEBGM: Sentence Embedding Based on Generation Model with multi-task learning\",\"authors\":\"Qian Wang ,&nbsp;Weiqi Zhang ,&nbsp;Tianyi Lei ,&nbsp;Yu Cao ,&nbsp;Dezhong Peng ,&nbsp;Xu Wang\",\"doi\":\"10.1016/j.csl.2024.101647\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Sentence embedding, which aims to learn an effective representation of a sentence, is a significant part for downstream tasks. Recently, using contrastive learning and pre-trained model, most methods of sentence embedding achieve encouraging results. However, on the one hand, these methods utilize discrete data augmentation to obtain positive samples performing contrastive learning, which could distort the original semantic of sentences. On the other hand, most methods directly employ the contrastive frameworks of computer vision to perform contrastive learning, which could confine the contrastive training due to the discrete and sparse text data compared with image data. To solve the issues above, we design a novel contrastive framework based on generation model with multi-task learning by supervised contrastive training on the dataset of natural language inference (NLI) to obtain meaningful sentence embedding (SEBGM). SEBGM makes use of multi-task learning to enhance the usage of word-level and sentence-level semantic information of samples. In this way, the positive samples of SEBGM are from NLI rather than data augmentation. 
Extensive experiments show that our proposed SEBGM can advance the state-of-the-art sentence embedding on the semantic textual similarity (STS) tasks by utilizing multi-task learning.</p></div>\",\"PeriodicalId\":50638,\"journal\":{\"name\":\"Computer Speech and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2024-04-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer Speech and Language\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0885230824000305\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230824000305","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Sentence embedding, which aims to learn an effective representation of a sentence, is a significant component of downstream tasks. Recently, most sentence-embedding methods have achieved encouraging results by combining contrastive learning with pre-trained models. However, on the one hand, these methods rely on discrete data augmentation to obtain the positive samples used in contrastive learning, which can distort the original semantics of sentences. On the other hand, most methods directly adopt contrastive frameworks from computer vision, which can constrain contrastive training because text data are discrete and sparse compared with image data. To address these issues, we design SEBGM, a novel contrastive framework built on a generation model with multi-task learning, trained with supervised contrastive learning on a natural language inference (NLI) dataset to obtain meaningful sentence embeddings. SEBGM uses multi-task learning to better exploit the word-level and sentence-level semantic information of samples; its positive samples therefore come from NLI rather than from data augmentation. Extensive experiments show that SEBGM advances the state of the art in sentence embedding on semantic textual similarity (STS) tasks.
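The abstract does not give implementation details, but supervised contrastive training on NLI, as the abstract describes, is commonly realized as an InfoNCE objective in which each premise's entailed hypothesis is its positive and all other hypotheses in the batch serve as in-batch negatives. The sketch below illustrates that generic objective only; the function name nli_contrastive_loss, the temperature of 0.05, and the random embeddings are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of supervised contrastive training with NLI-derived
# positives, in the spirit of the setup the abstract describes. The encoder,
# batch construction, and temperature are assumptions for illustration,
# not the authors' exact implementation.
import torch
import torch.nn.functional as F

def nli_contrastive_loss(premise_emb: torch.Tensor,
                         entail_emb: torch.Tensor,
                         temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE loss: the i-th premise's positive is its entailed hypothesis;
    the other hypotheses in the batch act as in-batch negatives."""
    # Normalize so dot products equal cosine similarities.
    premise_emb = F.normalize(premise_emb, dim=-1)
    entail_emb = F.normalize(entail_emb, dim=-1)
    # Pairwise similarity matrix, scaled by temperature: (batch, batch).
    sim = premise_emb @ entail_emb.T / temperature
    # Matching pairs lie on the diagonal, so the target for row i is i.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)

# Usage: embeddings would come from a sentence encoder over NLI
# premise/entailment pairs (random tensors here, purely for illustration).
batch_size, dim = 8, 768
loss = nli_contrastive_loss(torch.randn(batch_size, dim),
                            torch.randn(batch_size, dim))
```

One property worth noting about this formulation: because negatives are drawn from within the batch, larger batches yield harder contrastive training without any explicit negative mining.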

Source Journal
Computer Speech and Language (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 11.30
Self-citation rate: 4.70%
Annual publication volume: 80
Average review time: 22.9 weeks
期刊介绍: Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language. The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.
Latest Articles in This Journal
Editorial Board
Enhancing analysis of diadochokinetic speech using deep neural networks
Copiously Quote Classics: Improving Chinese Poetry Generation with historical allusion knowledge
Significance of chirp MFCC as a feature in speech and audio applications
Artificial disfluency detection, uh no, disfluency generation for the masses