Quantifying graphemic variation via large text corpora
Hanna Lüschow
Zeitschrift für Sprachwissenschaft 40(1), pp. 421–440. Published 2021-11-01. DOI: 10.1515/zfs-2021-2038
Abstract The use of some basic computer science concepts can expand the possibilities of (manual) graphematic text corpus analysis. With them it can be shown that graphematic variation decreases steadily in printed German texts from 1600 to 1900. While the variability declines continuously at the text-internal level, it decreases faster across the writing system of individual decades as a whole. But which changes took place exactly? Which types of variation disappeared quickly, and which persisted? How do we deal with amounts of data too large to process manually? Which aspects are of special importance, and which go missing, when working with a large textual base? A measure called entropy quantifies the variability of the spellings of a given word form, lemma, text or subcorpus, with few restrictions but also with less detail in the results. The difference between two spellings can be measured via the Damerau-Levenshtein distance. To a certain degree, automated data handling can also determine the exact changes that took place; these differences can then be counted and ranked. The German Text Archive of the Berlin-Brandenburg Academy of Sciences and Humanities serves as the data source. It offers, for example, orthographic normalization – which is extremely useful – as well as part-of-speech preprocessing and lemmatization. In contrast to many other approaches, establishing today's normed spellings is not the aim of these developments and is therefore not the focus of the research; instead, the differences between individual spellings are of interest. Subsequently, intra- and extralinguistic factors that caused these developments should be determined. These methodological findings could then be used to improve research methods in other graphematic fields of interest, e.g. computer-mediated communication.
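The two measures named in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the word forms below are invented examples of early-modern German spelling variants, not data from the German Text Archive, and the Damerau-Levenshtein variant shown is the common "optimal string alignment" formulation (insertions, deletions, substitutions, and adjacent transpositions).

```python
import math
from collections import Counter

def spelling_entropy(spellings):
    """Shannon entropy (in bits) of the distribution of attested spellings.

    0.0 means only one spelling is attested; higher values indicate
    more graphematic variation for the word form, lemma, or (sub)corpus.
    """
    counts = Counter(spellings)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def damerau_levenshtein(a, b):
    """Optimal-string-alignment Damerau-Levenshtein distance: the minimum
    number of insertions, deletions, substitutions, and transpositions of
    adjacent characters needed to turn string a into string b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution / match
            )
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

# Hypothetical attestations of one word form in a historical subcorpus:
tokens = ["theil", "theil", "teil", "theyl"]
print(spelling_entropy(tokens))            # 1.5 bits of spelling variation
print(damerau_levenshtein("theyl", "teil"))  # 2 edits apart
```

Computed per decade, the entropy values make the abstract's claim testable: a writing system whose spellings converge toward one norm shows entropy falling toward zero, while the pairwise edit distances identify which concrete changes (e.g. dropped ⟨h⟩, ⟨y⟩ → ⟨i⟩) account for the convergence.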
Journal description:
The aim of the journal is to promote linguistic research by publishing high-quality contributions and thematic special issues from all fields and trends of modern linguistics. In addition to articles and reviews, the journal also features contributions to discussions of current controversies in the field as well as overview articles outlining the state of the art of relevant research paradigms. Topics:
- General linguistics
- Language typology
- Language acquisition, language change and synchronic variation
- Empirical linguistics: experimental and corpus-based research
- Contributions to theory-building