{"title":"Omni-font character recognition using templates and neural networks","authors":"S. Mostert, J.A. Brand","doi":"10.1109/COMSIG.1992.274275","DOIUrl":null,"url":null,"abstract":"With regard to facsimile graphic pages, routines to extract character images from within the page are implemented. Methods to trace joinings in closely spaced letters are discussed. Preprocessing of the extracted image by skeleton extraction (using average area) is implemented to remove font specific factors such as bold and line thickening. After specification of the reduced image size required, the image is compressed with the necessary amount by a pixel averaging and overlapping routine for better context sensitivity. The reduced images are used to train multiple MLP neural networks each for a single font using the back propagation training algorithm. The outputs of the networks are combined to form a maximum likelihood search for the best match. Results close to 100% are obtainable.<<ETX>>","PeriodicalId":342857,"journal":{"name":"Proceedings of the 1992 South African Symposium on Communications and Signal Processing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1992 South African Symposium on Communications and Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COMSIG.1992.274275","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Routines are implemented to extract character images from within facsimile graphic pages. Methods to trace joins between closely spaced letters are discussed. The extracted image is preprocessed by skeleton extraction (using average area) to remove font-specific factors such as bold weight and line thickening. After the required reduced image size is specified, the image is compressed by the necessary amount using a pixel-averaging and overlapping routine for better context sensitivity. The reduced images are used to train multiple MLP neural networks, one per font, with the backpropagation training algorithm. The outputs of the networks are combined in a maximum-likelihood search for the best match. Recognition results close to 100% are obtainable.
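The pixel-averaging and overlapping reduction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window shapes, the 50% overlap factor, and the function name `reduce_image` are assumptions. Each output pixel averages an input window slightly enlarged so that neighbouring windows share pixels, which gives the reduced image some of the context sensitivity the abstract mentions.

```python
import numpy as np

def reduce_image(img, out_h, out_w, overlap=0.5):
    """Compress a character image to out_h x out_w by averaging
    overlapping pixel windows (sketch; overlap factor is an assumption)."""
    in_h, in_w = img.shape
    step_h, step_w = in_h / out_h, in_w / out_w
    out = np.zeros((out_h, out_w))
    for r in range(out_h):
        for c in range(out_w):
            # Enlarge each window by `overlap` of a step on every side,
            # clamped to the image bounds, so adjacent windows share pixels.
            r0 = max(0, int(r * step_h - overlap * step_h))
            r1 = min(in_h, int((r + 1) * step_h + overlap * step_h))
            c0 = max(0, int(c * step_w - overlap * step_w))
            c1 = min(in_w, int((c + 1) * step_w + overlap * step_w))
            out[r, c] = img[r0:r1, c0:c1].mean()
    return out
```

A reduced image produced this way would then be flattened into the input vector of the per-font MLP, and the best match taken as the maximum over all networks' outputs.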