{"title":"Review: Egbert and Baker (eds). 2020. Using Corpus Methods to Triangulate Linguistic Analysis","authors":"Xiaoli Fu","doi":"10.3366/cor.2021.0230","DOIUrl":null,"url":null,"abstract":"Previous research on methodological triangulation, like Baker and Egbert (2016), has mainly focussed on triangulation within corpus linguistics (CL). This timely volume presents triangulation between corpus linguistic methods and other linguistic methodologies through nine empirical studies in discourse analysis, applied linguistics and psycholinguistics. The volume consists of an introduction, nine chapters grouped into three sections, and a ‘Synthesis and Conclusion’. In the Introduction, the editors briefly introduce CL and methodological triangulation. A brief review of previous literature on triangulation between CL and other linguistic methods in the fields of discourse analysis, applied linguistics and psycholinguistics is then presented. It ends with a sequential introduction to the nine studies in the volume. Part I (Chapters 2 to 4) falls into the area of discourse analysis. To analyse text structure in a corpus of twenty-four academic lectures, in Chapter 2, Erin Schnur and Eniko Csomay employ manual/automatic segmentation and qualitative/quantitative analysis. The first approach involves manual segmentation using Mechanical Turk (MT) and qualitative coding of the 1,056 segments identified based on eight functions. The analysis here focusses on the distribution of segment functions in the texts. In the second approach, 769 Vocabulary-Based Discourse Units are automatically identified with TextTiler and then subjected to quantitative analysis, identifying four text-types of segments with similar linguistic features. Thus, the second case study focusses on the distribution of linguistic patterns in text structure to illustrate the association between language variation and pedagogical purpose. In Chapter 3, Tony McEnery, Helen Baker and Carmen Dayrell rely on an historical newspaper corpus to explore the reality of droughts in nineteenth-century Britain. To control the potential errors in the digitised","PeriodicalId":44933,"journal":{"name":"Corpora","volume":" ","pages":""},"PeriodicalIF":0.8000,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Corpora","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3366/cor.2021.0230","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"LINGUISTICS","Score":null,"Total":0}
Abstract
Previous research on methodological triangulation, such as Baker and Egbert (2016), has mainly focussed on triangulation within corpus linguistics (CL). This timely volume presents triangulation between corpus-linguistic methods and other linguistic methodologies through nine empirical studies in discourse analysis, applied linguistics and psycholinguistics. The volume consists of an introduction, nine chapters grouped into three sections, and a ‘Synthesis and Conclusion’. In the Introduction, the editors briefly introduce CL and methodological triangulation, then present a short review of previous literature on triangulation between CL and other linguistic methods in the fields of discourse analysis, applied linguistics and psycholinguistics. The Introduction ends with a sequential overview of the nine studies in the volume.

Part I (Chapters 2 to 4) falls within the area of discourse analysis. In Chapter 2, Erin Schnur and Eniko Csomay analyse text structure in a corpus of twenty-four academic lectures, combining manual and automatic segmentation with qualitative and quantitative analysis. The first approach involves manual segmentation using Mechanical Turk (MT) and qualitative coding of the 1,056 resulting segments according to eight functions; the analysis here focusses on the distribution of segment functions across the texts. In the second approach, 769 Vocabulary-Based Discourse Units are identified automatically with TextTiler and then subjected to quantitative analysis, which yields four text-types of segments sharing similar linguistic features. The second approach thus focusses on the distribution of linguistic patterns in text structure, illustrating the association between language variation and pedagogical purpose.

In Chapter 3, Tony McEnery, Helen Baker and Carmen Dayrell rely on an historical newspaper corpus to explore the reality of droughts in nineteenth-century Britain. To control the potential errors in the digitised