{"title":"DNN-HHOA:从复合文档图像中提取基于深度神经网络优化的表格数据","authors":"Devendra Tiwari, Anand Gupta, Rituraj Soni","doi":"10.1142/s021946782550010x","DOIUrl":null,"url":null,"abstract":"Text information extraction from a tabular structure within a compound document image (CDI) is crucial to help better understand the document. The main objective of text extraction is to extract only helpful information since tabular data represents the relation between text lying in a tuple. Text from an image may be of low contrast, different style, size, alignment, orientation, and complex background. This work presents a three-step tabular text extraction process, including pre-processing, separation, and extraction. The pre-processing step uses the guide image filter to remove various kinds of noise from the image. Improved binomial thresholding (IBT) separates the text from the image. Then the tabular text is recognized and extracted from CDI using deep neural network (DNN). In this work, weights of DNN layers are optimized by the Harris Hawk optimization algorithm (HHOA). Obtained text and associated information can be used in many ways, including replicating the document in digital format, information retrieval, and text summarization. The proposed process is applied comprehensively to UNLV, TableBank, and ICDAR 2013 image datasets. 
The complete procedure is implemented in Python, and precision metrics performance is verified.","PeriodicalId":44688,"journal":{"name":"International Journal of Image and Graphics","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"DNN-HHOA: Deep Neural Network Optimization-Based Tabular Data Extraction from Compound Document Images\",\"authors\":\"Devendra Tiwari, Anand Gupta, Rituraj Soni\",\"doi\":\"10.1142/s021946782550010x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Text information extraction from a tabular structure within a compound document image (CDI) is crucial to help better understand the document. The main objective of text extraction is to extract only helpful information since tabular data represents the relation between text lying in a tuple. Text from an image may be of low contrast, different style, size, alignment, orientation, and complex background. This work presents a three-step tabular text extraction process, including pre-processing, separation, and extraction. The pre-processing step uses the guide image filter to remove various kinds of noise from the image. Improved binomial thresholding (IBT) separates the text from the image. Then the tabular text is recognized and extracted from CDI using deep neural network (DNN). In this work, weights of DNN layers are optimized by the Harris Hawk optimization algorithm (HHOA). Obtained text and associated information can be used in many ways, including replicating the document in digital format, information retrieval, and text summarization. The proposed process is applied comprehensively to UNLV, TableBank, and ICDAR 2013 image datasets. 
The complete procedure is implemented in Python, and precision metrics performance is verified.\",\"PeriodicalId\":44688,\"journal\":{\"name\":\"International Journal of Image and Graphics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.8000,\"publicationDate\":\"2024-01-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Image and Graphics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1142/s021946782550010x\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Image and Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s021946782550010x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
DNN-HHOA: Deep Neural Network Optimization-Based Tabular Data Extraction from Compound Document Images
Extracting text from a tabular structure within a compound document image (CDI) is crucial to understanding the document. The main objective of text extraction is to retain only useful information, since tabular data encodes relations among the text items in a tuple. Text in an image may suffer from low contrast, varying style, size, alignment, and orientation, and a complex background. This work presents a three-step tabular text extraction process comprising pre-processing, separation, and extraction. The pre-processing step applies a guided image filter to remove various kinds of noise from the image. Improved binomial thresholding (IBT) then separates the text from the background. Finally, the tabular text is recognized and extracted from the CDI using a deep neural network (DNN) whose layer weights are optimized with the Harris Hawks optimization algorithm (HHOA). The extracted text and its associated information can serve many purposes, including replicating the document in digital form, information retrieval, and text summarization. The proposed process is evaluated comprehensively on the UNLV, TableBank, and ICDAR 2013 image datasets. The complete procedure is implemented in Python, and its performance is verified with precision-based metrics.
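The pre-processing and separation stages described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: it applies a self-guided image filter (He et al.'s guided filter, which the abstract names for denoising) followed by a global threshold. Since the paper's improved binomial thresholding (IBT) is not specified here, classic Otsu thresholding is used as a labeled stand-in; all function names and parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing (guided filter); here self-guided (guide == src)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # per-window linear coefficients
    b = mean_s - a * mean_g
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b

def otsu_threshold(img):
    """Classic Otsu threshold on a [0, 1] grayscale image (stand-in for IBT)."""
    hist, edges = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                          # class-0 probability
    mids = (edges[:-1] + edges[1:]) / 2
    mu = np.cumsum(p * mids)                      # class-0 cumulative mean
    mu_t = mu[-1]                                 # global mean
    denom = omega * (1.0 - omega)
    num = (mu_t * omega - mu) ** 2
    # Between-class variance; zero where a class would be empty.
    sigma_b = np.where(denom > 0, num / np.maximum(denom, 1e-12), 0.0)
    return mids[np.argmax(sigma_b)]

# Demo: a dark "text" block on a bright, noisy page background.
rng = np.random.default_rng(0)
page = np.full((64, 64), 0.9)
page[20:44, 10:54] = 0.1                          # synthetic text region
noisy = np.clip(page + rng.normal(0, 0.05, page.shape), 0.0, 1.0)

smooth = guided_filter(noisy, noisy, radius=3, eps=1e-2)
t = otsu_threshold(smooth)
binary = smooth < t                               # True where (dark) text pixels lie
```

The binary mask produced this way would feed the extraction stage, where the paper's DNN (with HHOA-tuned weights) recognizes the tabular text; that network is not reproduced here.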