Analysis of handwritten document images is one of the key areas of research in the image processing domain. The objective of such analysis is to recognize the text components in an image and extract the intended information. However, handwriting is usually inscribed on documents with rule lines, which act as guides that keep the writing straight and of uniform size. These lines make recognition difficult, so removing them automatically is a major issue in text image processing. To this end, this paper attempts to remove the horizontal rule lines and the vertical margin line for efficient recognition and analysis of the foreground text. Using mathematical morphology, the predominant horizontal and vertical lines are removed, leaving stray line fragments that hinder further processing of the text. These stray lines are then identified and removed using entropy computed over a sliding window with dynamic thresholding.
{"title":"Rule Line Detection and Removal in Handwritten Text Images","authors":"Syed Imtiaz, P. Nagabhushan, S. D. Gowda","doi":"10.1109/ICSIP.2014.55","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.55","url":null,"abstract":"Analysis of handwritten document images is one of the key areas of research in image processing domain. The objective of the analysis is to recognize the text components in an image and extract the intended information. However, inscription of handwriting usually would be on documents with rule lines, since they act as guide lines to the writer to ensure the writing remains straight and is of uniform size. These lines make the task of recognition difficult and hence removing them automatically becomes a major issue in text image processing. To accomplish this objective, an attempt is being made in this paper to remove the horizontal rule lines and vertical margin line for efficient recognition and analysis of the foreground text. Using mathematical morphology, predominant horizontal and vertical lines are removed leaving out stray lines which hinder the further processing of text. The stray lines are identified and removed using entropy with sliding window based on dynamic thresholding.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114650232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents the design, development and validation of a vision-based autonomous robotic system for military applications. The Sum of Absolute Difference (SAD) algorithm, which works on the principle of image subtraction, is used to implement the proposed image processing algorithm. The algorithm is validated in real time using a change-based moving object detection method. The novelty of this work lies in applying the developed autonomous robot to the detection of mines in the battlefield. The algorithm is validated both offline, through MATLAB simulation, and in real time through experiment. Once confidence in the algorithm was established, it was coded into microcontroller-based hardware and validated in real time. The real-time experimental results match the offline simulation results well; the small mismatch in detection distance and accuracy is due to limitations of the hardware used for the implementation.
{"title":"Vision Based Robotic System for Military Applications -- Design and Real Time Validation","authors":"Sandeep Bhat, M. Meenakshi","doi":"10.1109/ICSIP.2014.8","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.8","url":null,"abstract":"This paper presents the design, development and validation of vision based autonomous robotic system for military applications. Sum of Absolute Difference (SAD) algorithm is used for the implementation of the proposed image processing algorithm. It works on the principle of image subtraction. The developed algorithm is validated in real time by change-based moving object detection method. The novelty of this work is the application of the developed autonomous robot for the detection of mines in the war field. Developed algorithm is validated both in offline using MATLAB simulation and in real time by conducting an experiment. Once the confidence of using the algorithm is increased, developed algorithm is coded into the Microcontroller based hardware and is validated in real time. Real time experimental results match well with those of the offline simulation results. However, there is only a small mismatch in distance and accuracy of the target detection, which is due to the limitations of the hardware used for the implementation.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125194422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Images from a wide range of fields are now routinely processed for many purposes. This paper proposes an algorithm to protect a secret image whose confidentiality must be maintained, and to authenticate the distributor (dealer) who distributes that secret image to multiple users. The secret image is fused with the dealer's fingerprint for authentication: an image fusion technique combines the secret image and the fingerprint image into a single image. The fused image is then divided into a number of shares using a threshold secret sharing technique. This provides both confidentiality of the secret image and authentication of the dealer who sent it. Verification is performed during reconstruction of the secret image.
{"title":"A Novel Algorithm to Protect the Secret Image through Image Fusion and Verifying the Dealer and the Secret Image","authors":"P. Devaki, G. R. Rao","doi":"10.1109/ICSIP.2014.17","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.17","url":null,"abstract":"In the recent past the images of various fields are being considered for processing for various purposes. In this paper we are proposing an algorithm for protecting the secret image whose confidentiality needs to be maintained, and also to authenticate the distributor who distributes that secret image to multiple users. The secret image will be fused with the fingerprint of the dealer for authentication purpose. Fusion of the finger print will be done by using image fusion technique to generate a single image consisting of the secret image as well as the finger print image of the dealer. The fused image will be divided in to number of shares based on the threshold secret sharing technique. This provides both confidentiality of the secret image and as well as the authentication of the dealer who has sent the image. The verification will be done during reconstruction of the secret image.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125933326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable extraction/segmentation of text lines, words and characters is one of the most important steps in developing automated systems for understanding text in low resolution display board images. In this paper, a new approach for segmentation of text lines, words and characters from Kannada text in low resolution display board images is presented. The proposed method uses projection profile features and on-pixel distribution statistics for segmentation of text lines. The method also detects text lines containing consonant modifiers and merges them with the corresponding text lines, and efficiently separates overlapped text lines as well. The character extraction process computes character boundaries using vertical profile features to extract character images from every text line. Further, the word segmentation process uses k-means clustering to group inter-character gaps into character-gap and word-gap clusters, which are used to compute thresholds for extracting words. The method also accounts for variations in character and word gaps. The proposed methodology is evaluated on a data set of 1008 low resolution display board images containing Kannada text, captured with 2-megapixel mobile phone cameras at sizes 240x320, 600x800 and 900x1200. The method achieves text line segmentation accuracy of 97.17%, word segmentation accuracy of 97.54% and character extraction accuracy of 99.09%. The proposed method is tolerant to font variability, spacing variations between characters and words, the absence of a free segmentation path due to consonant and vowel modifiers, noise and other degradations. Experiments with images containing overlapped text lines have given promising results.
{"title":"A Robust Segmentation Technique for Line, Word and Character Extraction from Kannada Text in Low Resolution Display Board Images","authors":"S. Angadi, M. Kodabagi","doi":"10.1142/S021946781450003X","DOIUrl":"https://doi.org/10.1142/S021946781450003X","url":null,"abstract":"Reliable extraction/segmentation of text lines, words and characters is one of the very important steps for development of automated systems for understanding the text in low resolution display board images. In this paper, a new approach for segmentation of text lines, words and characters from Kannada text in low resolution display board images is presented. The proposed method uses projection profile features and on pixel distribution statistics for segmentation of text lines. The method also detects text lines containing consonant modifiers and merges them with corresponding text lines, and efficiently separates overlapped text lines as well. The character extraction process computes character boundaries using vertical profile features for extracting character images from every text line. Further, the word segmentation process uses k-means clustering to group inter character gaps into character and word cluster spaces, which are used to compute thresholds for extracting words. The method also takes care of variations in character and word gaps. The proposed methodology is evaluated on a data set of 1008 low resolution images of display boards containing Kannada text captured from 2 mega pixel cameras on mobile phones at various sizes 240x320, 600x800 and 900x1200. The method achieves text line segmentation accuracy of 97.17%, word segmentation accuracy of 97.54% and character extraction accuracy of 99.09%. The proposed method is tolerant to font variability, spacing variations between characters and words, absence of free segmentation path due to consonant and vowel modifiers, noise and other degradations. The experimentation with images containing overlapped text lines has given promising results.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116532668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic detection of human ovarian follicles has attracted increasing interest in recent years and is a significant area of women's health. Improper development of ovarian follicles is an important cause of infertility in women. Currently, ovarian follicles are detected through the diagnostic imaging technique of ultrasonography. Follicles differ in shape and colour; furthermore, the camouflaging characteristics of ultrasound images and the presence of speckle noise make follicle detection a challenging task. In this paper, a novel method for automatic recognition of follicles in ultrasound images is proposed, based on discrete wavelet transform and k-means clustering. The discrete wavelet transform is preferred for its superior spectral-temporal resolution, which helps in despeckling the ultrasound images. K-means clustering is then used to segment the image into different anatomical structures, yielding better segmentation. Structural Similarity (SSIM), False Acceptance Rate (FAR) and False Rejection Rate (FRR) are used to demonstrate the efficiency of the proposed method.
{"title":"Automatic Segmentation of Ovarian Follicle Using K-Means Clustering","authors":"K. V, M. Ramya","doi":"10.1109/ICSIP.2014.27","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.27","url":null,"abstract":"Automatic detection of human ovarian follicles has been of increasing interest in recent years and is a significant area of women's health. Improper development of ovarian follicles has been an important reason for infertility in women. Currently, detection of ovarian follicle is done through diagnostic imaging technique called ultrasonography. Follicles differ in shape and colour. Further, the camouflaging characteristic of ultrasound images and the presence of speckle noise make the follicle detection a challenging task. In this paper, a novel method for automatic recognition of follicles in ultrasound images is proposed. Discrete wavelet transform based k-means clustering is proposed. Discrete wavelet transform is preferred due to its superior spectral temporal resolution that helps in despeckling the ultrasound images. K-means clustering is used to segment the image into different anatomical structures to yield better segmentation. Structural Similarity (SSIM), False Acceptance Rate (FAR) and False Rejection Rate (FRR) are used to demonstrate the efficiency of the proposed method.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122819587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital video is one of the most popular multimedia data types exchanged on the internet. Because it can be replicated perfectly, many illegal copies of the original video can be made, so methods are needed to protect the owner's copyright and prevent illegal copying. A video may also undergo intentional attacks such as frame dropping, averaging, cropping and median filtering, and unintentional attacks such as noise addition and compression, which can compromise the copyright information and thereby defeat authentication. In this paper, the design and implementation of scene-based watermarking with blind extraction is proposed. The developed method embeds the 8 bit-plane images obtained from a single gray scale watermark image into different scenes of a video sequence. In this algorithm, selected luminance values in the video frames are divided into groups, and the watermark bits are embedded by adjusting the relative relationship of the members within each group. A sufficient number of watermark bits can be embedded into the video frames without causing noticeable distortion, and the watermark can be correctly retrieved at the extraction stage even after various types of video manipulation and other signal processing attacks.
{"title":"Video Watermarking by Adjusting the Pixel Values and Using Scene Change Detection","authors":"P. S. Venugopala, H. Sarojadevi, N. Chiplunkar, Vani Bhat","doi":"10.1109/ICSIP.2014.47","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.47","url":null,"abstract":"Digital video is one of the popular multimedia data exchanged in the internet. Due to its perfectly replicable nature many illegal copies of the original video can be made. Methods are needed to protect copyrights of the owner and prevent illegal copying. A video can also undergo several intentional attacks like frame dropping, averaging, cropping and median filtering and unintentional attacks like addition of noise and compression which can compromise copyright information, thereby denying the authentication. In this paper, the design and implementation of scene based watermarking where extraction will be a blind method, is proposed. The developed method embeds 8 bit-plane images, obtained from single gray scale watermark image, into different scenes of a video sequence. In this algorithm, some of the luminous values in the video pictures are selected and divided into groups, and the watermark bits are embedded by adjusting the relative relationship of the member in each group. A sufficient number of watermark bits will be embedded into the video pictures without causing noticeable distortion. The watermark will be correctly retrieved at the extraction stage, even after various types of video manipulation and other signal processing attacks.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127013612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we introduce an improved combined cryptography and error correction method, called "Crypto-Coding". Although earlier work has studied combining error correction and encryption into a single step, the modification introduced here in the encryption block improves the performance of the system. The performance of the combined system is evaluated on the Land Mobile Satellite (LMS) channel, and the results are compared with a system using ideal encryption and decryption.
{"title":"Crypto-coding Technique for Land Mobile Satellite Channel","authors":"Rajashri Khanai, G. Kulkarni, Dattaprasad Torse","doi":"10.1109/ICSIP.2014.30","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.30","url":null,"abstract":"In this paper, we have introduced an improved combined cryptography-error correction method, which is called \"Crypto-Coding\". Although in previous experiments, combined error correction and encryption functionality has been studied into one single step, the modification needed in the introduction of encryption block improves the performance of the system. The combined System's performances are evaluated on Land Mobile Satellite (LMS) Channel. The results are compared with the system using ideal encryption and decryption.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an effective method for classification of power quality disturbances, employing the wavelet transform for disturbance identification and a Modular Artificial Neural Network (MANN) for accurate classification of these disturbances. Disturbances typical of power systems, such as voltage sag, swell and harmonics, are simulated. The wavelet transform, which can analyze these power quality problems simultaneously in both the time and frequency domains, is used to extract features of the disturbances by decomposing the signal through multi-resolution analysis. These features are used to detect and localize the disturbances. An ANN, a powerful tool with parallel processing capability, is well suited to classifying the disturbances, and a modular neural network is employed here for automatic classification of power quality disturbances. The proposed algorithm has been verified by simulating various PQ disturbances, and the results are analyzed using MathWorks MATLAB.
{"title":"Wavelet Based Signal Processing Technique for Classification of Power Quality Disturbances","authors":"M. Tuljapurkar, A. Dharme","doi":"10.1109/ICSIP.2014.59","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.59","url":null,"abstract":"This paper presents an effective method for classification of power quality disturbances, employing wavelet transformation for disturbance identification and Modular artificial Neural Network(MANN) technique for accurate classification of these disturbances. Disturbances such as voltage sag, swell and harmonics which are typical in power system are simulated. Wavelet transform, which has the ability to analyze these power quality problems simultaneously in both time and frequency domain is used to extract features of the disturbances by decomposing the signal using multi resolution analysis. These features are used to detect and localize the disturbances. ANN, the powerful tool with parallel processing capability, is suitable to classify the disturbances. Modular neural network is employed in this paper for automatic classification of power quality disturbances. The proposed algorithm has been verified by simulating various PQ disturbances and results are analyzed using Math works MATLAB.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115345495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face Recognition (FR) under varying lighting conditions and pose is very challenging. This paper proposes a novel approach for enhancing the performance of an FR system, employing a unique combination of Active Illumination Equalization (AIE), Image Sharpening (IS), Standard Deviation Filtering (SDF), Mirror Image Superposition (MIS) and Binary Particle Swarm Optimization (BPSO). AIE removes non-uniform illumination and MIS neutralizes pose variation. The Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) are used for efficient feature extraction, and a BPSO-based feature selection algorithm searches the feature space for the optimal feature subset. Experimental results obtained by applying the proposed algorithm to the Color FERET, Pointing Head Pose and Extended Yale B face databases show that the proposed system outperforms other FR systems.
{"title":"Face Recognition Using Active Illumination Equalization and Mirror Image Superposition as Pre-processing Techniques","authors":"S. Hitesh, Babu Student, Shreyas H R Student, K. Manikantan, S. Ramachandran","doi":"10.1109/ICSIP.2014.21","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.21","url":null,"abstract":"Face Recognition (FR) under varying lighting conditions and pose is very challenging. This paper proposes a novel approach for enhancing the performance of a FR system, employing a unique combination of Active Illumination Equalization (AIE), Image Sharpening (IS), Standard Deviation Filtering (SDF), Mirror Image Superposition (MIS) and Binary Particle Swarm Optimization (BPSO). AIE is used for removal of non-uniform illumination and MIS is used to neutralize pose variance. Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT) are used for efficient feature extraction and BPSO-based feature selection algorithm is used to search the feature space for the optimal feature subset. Experimental results, obtained by applying the proposed algorithm on Color FERET, Pointing Head Pose and Extended Yale B face databases, show that the proposed system outperforms other FR systems.","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117048227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compression of documents, images, audio and video has traditionally been practiced to increase the efficiency of data storage and transfer. However, to process or carry out any analytical computation, decompression has become an unavoidable prerequisite. In this research work, we attempt to compute entropy, an important document analytic, directly from compressed documents. We use the Conventional Entropy Quantifier (CEQ) and the Spatial Entropy Quantifier (SEQ) for entropy computation [1]. The entropies obtained are useful in applications such as establishing equivalence, word spotting and document retrieval. Experiments have been performed on all the data sets of [1], at character, word and line levels, taking compressed documents in the run-length compressed domain. The algorithms developed are computation and space efficient, and the results obtained match 100% with those reported in [1].
{"title":"Entropy Computations of Document Images in Run-Length Compressed Domain","authors":"P. Nagabhushan, M. Javed, B. Chaudhuri","doi":"10.1109/ICSIP.2014.51","DOIUrl":"https://doi.org/10.1109/ICSIP.2014.51","url":null,"abstract":"Compression of documents, images, audios and videos have been traditionally practiced to increase the efficiency of data storage and transfer. However, in order to process or carry out any analytical computations, decompression has become an unavoidable pre-requisite. In this research work, we have attempted to compute the entropy, which is an important document analytic directly from the compressed documents. We use Conventional Entropy Quantifier (CEQ) and Spatial Entropy Quantifiers (SEQ) for entropy computations [1]. The entropies obtained are useful in applications like establishing equivalence, word spotting and document retrieval. Experiments have been performed with all the data sets of [1], at character, word and line levels taking compressed documents in run-length compressed domain. The algorithms developed are computational and space efficient, and results obtained match 100% with the results reported in [1].","PeriodicalId":111591,"journal":{"name":"2014 Fifth International Conference on Signal and Image Processing","volume":"7 Suppl 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121865809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}