Could this be next for corpus linguistics? Methods of semi-automatic data annotation with contextualized word embeddings

IF 1.1 · CAS Tier 2 (Literature) · LANGUAGE & LINGUISTICS · Linguistics Vanguard · Pub Date: 2024-06-24 · DOI: 10.1515/lingvan-2022-0142
Lauren Fonteyn, Enrique Manjavacas, Nina Haket, Aletta G. Dorst, Eva Kruijt
Citations: 0

Abstract

This paper explores how linguistic data annotation can be made (semi-)automatic by means of machine learning. More specifically, we focus on the use of “contextualized word embeddings” (i.e. vectorized representations of the meaning of word tokens based on the sentential context in which they appear) extracted by large language models (LLMs). In three example case studies, we assess how the contextualized embeddings generated by LLMs can be combined with different machine learning approaches to serve as a flexible, adaptable semi-automated data annotation tool for corpus linguists. Subsequently, to evaluate which approach is most reliable across the different case studies, we use a Bayesian framework for model comparison, which estimates the probability that the performance of a given classification approach is stronger than that of an alternative approach. Our results indicate that combining contextualized word embeddings with metric fine-tuning yield highly accurate automatic annotations.
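The paper itself evaluates several classifier architectures over LLM-extracted embeddings and uses a Bayesian framework for model comparison; neither is reproduced here. As a minimal illustrative sketch only, the snippet below assumes token embeddings have already been extracted (e.g. with a transformer LLM) and stands in a simple cosine-similarity k-NN annotator for the paper's classifiers, plus a Beta-posterior Monte Carlo estimate of P(accuracy_A > accuracy_B) for the Bayesian comparison. The function names `knn_annotate` and `prob_stronger`, the toy 2-D "embeddings", and the 92/100 vs. 85/100 scores are all hypothetical.

```python
import numpy as np

def knn_annotate(train_vecs, train_labels, query_vecs, k=3):
    """Label each query embedding by majority vote among its k most
    cosine-similar neighbours in a small hand-annotated seed set."""
    # L2-normalise so that the dot product equals cosine similarity
    tn = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    qn = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    sims = qn @ tn.T  # shape (n_query, n_train)
    out = []
    for row in sims:
        votes = [train_labels[i] for i in np.argsort(row)[-k:]]
        out.append(max(set(votes), key=votes.count))
    return out

def prob_stronger(correct_a, correct_b, n, draws=100_000, seed=1):
    """Monte Carlo estimate of P(accuracy_A > accuracy_B), placing
    independent Beta posteriors (uniform priors) on each accuracy."""
    rng = np.random.default_rng(seed)
    acc_a = rng.beta(1 + correct_a, 1 + n - correct_a, draws)
    acc_b = rng.beta(1 + correct_b, 1 + n - correct_b, draws)
    return float((acc_a > acc_b).mean())

# Toy stand-in for contextualized embeddings: two separable sense clusters.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal((1.0, 0.0), 0.1, (10, 2)),
                   rng.normal((0.0, 1.0), 0.1, (10, 2))])
gold = ["A"] * 10 + ["B"] * 10
predicted = knn_annotate(train, gold, np.array([[0.9, 0.1], [0.1, 0.9]]))

# Hypothetical evaluation: approach A scores 92/100, approach B 85/100.
p = prob_stronger(92, 85, 100)
```

In practice the seed set would be a few hundred manually annotated tokens, and the posterior probability `p` quantifies how confident one can be that one annotation approach outperforms another on held-out data.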
Source journal: Linguistics Vanguard
CiteScore: 2.00
Self-citation rate: 18.20%
Articles published: 105
About the journal: Linguistics Vanguard is a channel for high-quality articles and innovative approaches in all major fields of linguistics. This multimodal journal is published solely online and provides an accessible platform supporting both traditional and new kinds of publications. Linguistics Vanguard seeks to publish concise and up-to-date reports on the state of the art in linguistics as well as cutting-edge research papers. With its topical breadth of coverage and quick rate of production, it is one of the leading platforms for scientific exchange in linguistics. Its broad theoretical range, international scope, and diversity of article formats engage students and scholars alike. All topics within linguistics are welcome. The journal especially encourages submissions taking advantage of its multimodal platform designed to integrate interactive content, including audio and video, images, maps, software code, raw data, and any other media that enhances the traditional written word. The platform and concise article format allow for rapid turnaround of submissions. Full peer review assures quality and enables authors to receive appropriate credit for their work. The journal publishes general submissions as well as special collections. Ideas for special collections may be submitted to the editors for consideration.
Latest articles in this journal:
From sociolinguistic perception to strategic action in the study of social meaning
Sign recognition: the effect of parameters and features in sign mispronunciations
The use of the narrative final vowel -á by the Lingala-speaking youth of Kinshasa: from anterior to near/recent past
Re-taking the field: resuming in-person fieldwork amid the COVID-19 pandemic
Bibliographic bias and information-density sampling