Says Who? Effective Zero-Shot Annotation of Focalization
Rebecca M. M. Hicke, Yuri Bizzoni, Pascale Feldkamp, Ross Deans Kristensen-McLachlan
arXiv - CS - Computation and Language, 2024-09-17 (arxiv-2409.11390)
Citations: 0
Abstract
Focalization, the perspective through which narrative is presented, is encoded via a wide range of lexico-grammatical features and is subject to reader interpretation. Moreover, trained readers regularly disagree on interpretations, suggesting that this problem may be computationally intractable. In this paper, we present experiments testing how well contemporary Large Language Models (LLMs) annotate literary texts for focalization mode. Despite the challenging nature of the task, LLMs perform comparably to trained human annotators in our experiments. We provide a case study on the novels of Stephen King to demonstrate the usefulness of this approach for computational literary studies, illustrating how focalization can be studied at scale.