Multi-Class Document Image Classification using Deep Visual and Textual Features

Semih Sevim, Ekin Ekinci, S. İ. Omurca, Eren Berk Edinç, S. Eken, Türkücan Erdem, A. Sayar

Int. J. Comput. Intell. Appl., published 2022-06-21. DOI: 10.1142/s1469026822500134
Abstract
The digitalization era has brought digital documents with it, and classifying document images has become as important a need as classifying classical text documents. Document images, in which text documents are stored as images, contain both textual and visual features, unlike ordinary images. Therefore, it is possible to use both textual and visual features when classifying such data. Considering this, this study aims to classify document images using both textual and visual features and to determine which feature type is more successful for classification. In the text-based approach, each document class is labeled with the keywords associated with it, and classification is performed according to whether a document contains the related keywords. For the visual-based classification, we use four deep learning models, namely CNN, NASNet-Large, InceptionV3, and EfficientNetB3. The experimental study is carried out on document images obtained from applicants to Kocaeli University. As a result, EfficientNetB3 is the most successful of all models, with an F-score of 0.8987.
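The paper itself does not include code; the following is a minimal sketch, under assumed settings, of the two approaches the abstract describes: keyword matching for the text-based classifier and transfer learning with an EfficientNetB3 backbone for the visual classifier. The class names, keyword lists, image size, and training configuration below are hypothetical placeholders, not details taken from the paper.

```python
# Minimal sketch (not from the paper) of the two approaches described in the abstract.
from typing import Optional

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB3

# --- Text-based approach: assign the class whose keywords appear in the OCR'd text ---
CLASS_KEYWORDS = {                      # hypothetical keyword lists per document class
    "transcript": ["gpa", "course", "semester"],
    "id_card": ["identity", "national id"],
}

def classify_by_keywords(ocr_text: str) -> Optional[str]:
    """Return the first class whose associated keywords occur in the document text."""
    text = ocr_text.lower()
    for label, keywords in CLASS_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return label
    return None                         # no class keywords matched

# --- Visual approach: EfficientNetB3 backbone with a small classification head ---
NUM_CLASSES = len(CLASS_KEYWORDS)

def build_visual_model(img_size: int = 300) -> tf.keras.Model:
    base = EfficientNetB3(include_top=False, weights="imagenet",
                          input_shape=(img_size, img_size, 3))
    base.trainable = False              # freeze the backbone for transfer learning
    inputs = layers.Input(shape=(img_size, img_size, 3))
    x = base(inputs, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In this sketch the backbone is frozen and only the classification head is trained; whether the authors froze, partially fine-tuned, or fully fine-tuned the pretrained networks is not stated in the abstract.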