{"title":"Evaluating rhyme annotations for large corpora","authors":"Julien Baley","doi":"10.1163/19606028-bja10032","DOIUrl":null,"url":null,"abstract":"Recent methods have been proposed to produce automatic rhyme annotators for large rhymed corpora. These methods, such as Baley (2022b) greatly reduce the cost of annotating rhymed material, allowing historical linguists to focus on the analysis of the rhyme patterns. However, evidence for the quality of those annotations has been anecdotal, consisting of a handful of individual poem case studies. This paper proposes to address the issue: first, we discuss previously proposed metrics that evaluate the quality of an annotator’s output against a ground-truth annotation (List, Hill, and Foster; 2019) and we propose an alternative metric that is better suited to the task. Then, sampling from Baley’s published annotated corpus and re-annotating it by hand, we use the sample to demonstrate the lacunae in the original approach and show how to fix them. Finally, the hand-annotated sample and source code are published as additional data, so that other researchers can compare the performance of their own annotators.","PeriodicalId":35117,"journal":{"name":"Cahiers de Linguistique Asie Orientale","volume":"36 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cahiers de Linguistique Asie Orientale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/19606028-bja10032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0
Abstract
Recent methods have been proposed to produce automatic rhyme annotators for large rhymed corpora. These methods, such as Baley (2022b), greatly reduce the cost of annotating rhymed material, allowing historical linguists to focus on the analysis of the rhyme patterns. However, evidence for the quality of those annotations has been anecdotal, consisting of a handful of individual poem case studies. This paper proposes to address the issue: first, we discuss previously proposed metrics that evaluate the quality of an annotator’s output against a ground-truth annotation (List, Hill, and Foster 2019), and we propose an alternative metric that is better suited to the task. Then, sampling from Baley’s published annotated corpus and re-annotating it by hand, we use the sample to demonstrate the lacunae in the original approach and show how to fix them. Finally, the hand-annotated sample and source code are published as additional data, so that other researchers can compare the performance of their own annotators.
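Evaluating an annotator's output against a ground truth, as described above, amounts to comparing two partitions of the same rhyme words into rhyme groups. As an illustrative sketch only (not necessarily the exact metric used by List, Hill, and Foster 2019 or the one proposed in this paper), a standard way to score one clustering against another is B-Cubed precision, recall, and F-score:

```python
# Illustrative sketch: B-Cubed scores for comparing a predicted rhyme
# annotation against a gold one. Each annotation is modeled as a dict
# mapping a rhyme-word position to a cluster label, where positions
# sharing a label are annotated as rhyming with each other.

def b_cubed(gold, pred):
    """Return (precision, recall, F-score) of pred against gold.

    gold, pred: dicts with identical keys, mapping item -> cluster label.
    """
    items = list(gold)
    precision = recall = 0.0
    for i in items:
        # Items placed in the same cluster as i (including i itself).
        pred_cluster = {j for j in items if pred[j] == pred[i]}
        gold_cluster = {j for j in items if gold[j] == gold[i]}
        overlap = len(pred_cluster & gold_cluster)
        precision += overlap / len(pred_cluster)
        recall += overlap / len(gold_cluster)
    n = len(items)
    p, r = precision / n, recall / n
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f


# Hypothetical toy example: three rhyme-word positions "a", "b", "c".
gold = {"a": 1, "b": 1, "c": 2}   # gold says a rhymes with b
pred = {"a": 1, "b": 2, "c": 2}   # annotator says b rhymes with c
print(b_cubed(gold, pred))        # each score comes out to 2/3 here
```

A perfect annotator scores 1.0 on all three values; splitting a true rhyme group lowers recall, while merging distinct groups lowers precision, which is why clustering-based scores of this family are a natural fit for rhyme evaluation.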
About the journal:
The Cahiers is an international linguistics journal whose mission is to publish new and original research on the analysis of languages of the Asian region, be it descriptive or theoretical. This clearly reflects the broad research domain of our laboratory: the Centre for Linguistic Research on East Asian Languages (CRLAO). The journal was created in 1977 by Viviane Alleton and Alain Peyraube and has been directed by three successive teams of editors, all professors based at the CRLAO in Paris. An Editorial Board, composed of scholars from around the world, assists with the reviewing process and serves in a consultative role.