MLLabs-LIG at TempoWiC 2022: A Generative Approach for Examining Temporal Meaning Shift

Chenyang Lyu, Yongxin Zhou, Tianbo Ji
DOI: 10.18653/v1/2022.evonlp-1.1
Published in: Proceedings of the First Workshop on Ever Evolving NLP (EvoNLP)
Citations: 1

Abstract

In this paper, we present our system for the EvoNLP 2022 shared task on Temporal Meaning Shift (TempoWiC). Unlike the typically used discriminative models, we propose a generative approach based on pre-trained generation models. The basic architecture of our system is a seq2seq model: the input sequence consists of two documents followed by a question asking whether the meaning of the target word has changed, and the target output sequence is a declarative sentence stating whether or not the meaning of the target word has changed. Experimental results on the TempoWiC test set show that our best system (with time information) obtained an accuracy of 68.09% and a Macro F-1 score of 62.59%, ranking 12th among all submitted systems. The results demonstrate the plausibility of using generation models for WiC tasks, while also indicating that there is still room for further improvement.
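The input/output formatting described above can be sketched as follows. This is a minimal illustration, not the authors' code: the prompt templates, the separator token, and the function name are assumptions; in practice the separator and phrasing would be matched to the chosen pre-trained seq2seq model (e.g. BART or T5).

```python
# Hypothetical sketch of the seq2seq formatting the abstract describes:
# input = two documents + a question about the target word;
# target = a declarative sentence stating whether the meaning changed.
def build_seq2seq_example(doc1: str, doc2: str, target_word: str, changed: bool):
    """Format a TempoWiC pair as an (input, target) string pair."""
    input_seq = (
        f"{doc1} </s> {doc2} </s> "
        f"Does the meaning of the word '{target_word}' "
        f"change between the two documents?"
    )
    target_seq = (
        f"The meaning of the word '{target_word}' "
        f"{'changed' if changed else 'did not change'}."
    )
    return input_seq, target_seq

inp, tgt = build_seq2seq_example(
    "The stream was really laggy tonight.",   # tweet 1
    "We walked along the stream at dawn.",    # tweet 2
    "stream",
    changed=True,
)
```

At inference time, the generated declarative sentence would be mapped back to a binary label, e.g. by checking whether the output contains "did not change".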