{"title":"数字化十八世纪希伯来手稿整理:一个基于规则的解析系统,用于自动编码关键设备","authors":"Luigi Bambaci","doi":"10.1109/CiSt49399.2021.9357258","DOIUrl":null,"url":null,"abstract":"Manually encoding variant readings is a difficult and time-consuming task. Markup languages ensure data exchange and reusability but are very difficult to handle especially in the case of texts characterized by a rich textual tradition and editions with extensive critical apparatus. Scholars engaged in digitizing printed critical editions find themselves dealing with different levels of problems, including the revision of OCR outputs and the conversion from plain text to a coherent XML encoding. In this article we illustrate how it is possible to exploit the structured language of critical apparatus and the conventions of the domain of textual philology as means to automate processing and encoding. Finally we discuss the advantages deriving from the adoption of a parsing system over a manual encoding, which go from data compression to the possibility of automatically detecting inconsistencies in the printed source, of correcting errors originated after OCR processing, and of better controlling the generation of semantic errors during conversion into XML code. Our case study concerns the digitization of a collation of Hebrew manuscripts and printed editions realized by the English scholar Benjamin Kennicott in the second half of the XVIII century.","PeriodicalId":253233,"journal":{"name":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Digitizing an Eighteenth Century Collation of Hebrew Manuscripts: A Rule-Based Parsing System for Automatically Encoding Critical Apparatus\",\"authors\":\"Luigi Bambaci\",\"doi\":\"10.1109/CiSt49399.2021.9357258\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Manually encoding variant readings is a difficult and time-consuming task. Markup languages ensure data exchange and reusability but are very difficult to handle especially in the case of texts characterized by a rich textual tradition and editions with extensive critical apparatus. Scholars engaged in digitizing printed critical editions find themselves dealing with different levels of problems, including the revision of OCR outputs and the conversion from plain text to a coherent XML encoding. In this article we illustrate how it is possible to exploit the structured language of critical apparatus and the conventions of the domain of textual philology as means to automate processing and encoding. Finally we discuss the advantages deriving from the adoption of a parsing system over a manual encoding, which go from data compression to the possibility of automatically detecting inconsistencies in the printed source, of correcting errors originated after OCR processing, and of better controlling the generation of semantic errors during conversion into XML code. 
Our case study concerns the digitization of a collation of Hebrew manuscripts and printed editions realized by the English scholar Benjamin Kennicott in the second half of the XVIII century.\",\"PeriodicalId\":253233,\"journal\":{\"name\":\"2020 6th IEEE Congress on Information Science and Technology (CiSt)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 6th IEEE Congress on Information Science and Technology (CiSt)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CiSt49399.2021.9357258\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 6th IEEE Congress on Information Science and Technology (CiSt)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CiSt49399.2021.9357258","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Digitizing an Eighteenth Century Collation of Hebrew Manuscripts: A Rule-Based Parsing System for Automatically Encoding Critical Apparatus
Manually encoding variant readings is a difficult and time-consuming task. Markup languages ensure data exchange and reusability, but they are difficult to handle, especially for texts with a rich textual tradition and editions with an extensive critical apparatus. Scholars engaged in digitizing printed critical editions face problems at several levels, including the revision of OCR output and the conversion from plain text to a coherent XML encoding. In this article we illustrate how the structured language of the critical apparatus and the conventions of textual philology can be exploited to automate processing and encoding. Finally, we discuss the advantages of a parsing system over manual encoding, which range from data compression to the ability to automatically detect inconsistencies in the printed source, correct errors introduced during OCR processing, and better control the generation of semantic errors during conversion into XML. Our case study concerns the digitization of a collation of Hebrew manuscripts and printed editions produced by the English scholar Benjamin Kennicott in the second half of the eighteenth century.
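To make the idea concrete, the sketch below shows what rule-based conversion of a plain-text apparatus entry into XML might look like. It assumes a simplified entry format ("lemma ] variant witness-numbers") loosely modelled on Kennicott-style collations and a TEI-like app/lem/rdg target encoding; the entry pattern, the "#K" witness sigla, and the entry_to_tei helper are illustrative assumptions, since the paper's actual grammar and output schema are not given in the abstract.

```python
# Minimal sketch of rule-based apparatus parsing (assumptions noted above);
# the real system's grammar and TEI mapping may differ substantially.
import re
from xml.sax.saxutils import escape

# Hypothetical rule: lemma, "]", variant reading, comma-separated witness numbers.
ENTRY = re.compile(r"^(?P<lem>.+?)\s*\]\s*(?P<rdg>.+?)\s+(?P<wits>[\d,\s]+)$")

def entry_to_tei(line: str) -> str:
    """Convert one plain-text apparatus entry into a TEI-style <app> element."""
    m = ENTRY.match(line.strip())
    if m is None:
        # Entries that violate the expected pattern surface as errors, which is
        # how a parser can flag OCR noise or inconsistencies in the source.
        raise ValueError(f"entry does not match the expected pattern: {line!r}")
    # Turn "1, 17, 99" into pointers to (hypothetical) witness identifiers.
    wits = " ".join("#K" + w.strip() for w in m["wits"].split(",") if w.strip())
    return (
        "<app>"
        f"<lem>{escape(m['lem'])}</lem>"
        f'<rdg wit="{wits}">{escape(m["rdg"])}</rdg>'
        "</app>"
    )

print(entry_to_tei("בראשית ] ברשית 1, 17, 99"))
# -> <app><lem>בראשית</lem><rdg wit="#K1 #K17 #K99">ברשית</rdg></app>
```

Real apparatus entries are of course far messier (abbreviations, witness ranges, multiple competing readings), which is where a full grammar for the apparatus language and the automatic consistency checks described in the abstract come in.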