Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776237
Fidalizia Pyrtuh, Sarfaraz Jelil, Geetima Kachari, L. J. Singh
This paper presents a comparative study of normalization techniques applied at the feature level in a voice-password-based speaker verification system. The input speech samples are recorded at different times and in different environments, so they vary due to environmental interference, noise, emotion, and similar factors. Each input sample is a human voice uttering a unique password, recorded at three different times of day. The samples are processed through sampling, pre-emphasis, MFCC feature extraction, and DTW. To enhance the features, we apply three popular feature normalization techniques, namely MVN (Mean and Variance Normalization), CMN (Cepstral Mean Normalization), and PCA (Principal Component Analysis), and analyze the result of each technique individually. The objective of this paper is to compare the performance and efficiency of these techniques and to evaluate which gives the best verification rate. According to our findings, CMN gives the best results.
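As a toy illustration of two of the normalization schemes compared here, the sketch below applies CMN (per-dimension mean subtraction) and MVN (mean subtraction plus variance scaling) to a synthetic MFCC matrix; the function names and data are ours, not the paper's:

```python
import numpy as np

def cmn(features):
    """Cepstral Mean Normalization: subtract the per-dimension mean."""
    return features - features.mean(axis=0)

def mvn(features):
    """Mean and Variance Normalization: zero mean, unit variance per dimension."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    return (features - mu) / (sigma + 1e-8)  # epsilon guards near-silent dimensions

# Toy MFCC matrix: 100 frames x 13 cepstral coefficients
rng = np.random.default_rng(0)
mfcc = rng.normal(loc=3.0, scale=2.0, size=(100, 13))

mfcc_cmn = cmn(mfcc)
mfcc_mvn = mvn(mfcc)
```

After CMN each coefficient track has zero mean (removing stationary channel effects); MVN additionally scales each track to unit variance.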
{"title":"Comparative evaluation of feature normalization techniques for voice password based speaker verification","authors":"Fidalizia Pyrtuh, Sarfaraz Jelil, Geetima Kachari, L. J. Singh","doi":"10.1109/NCVPRIPG.2013.6776237","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776237","url":null,"abstract":"This paper presents a comparative study of the normalization techniques used at feature level in voice password based speaker verification system. The input sample speech is recorded at different instants of time and environment. Hence, there is a variation in the input sample due to the environmental interference, noise, emotions etc. The input sample is a human voice with unique passwords taken/recorded at three different instants of time or day. This input sample is processed using sampling, pre-emphasis, MFCC feature extraction and DTW. In order to enhance the features we have used three different popular feature normalization techniques namely MVN (Mean and Variance Normalization), CMN (Cepstral Mean Normalization) and PCA(Principal Component Analysis) and analyzed the result of each technique individually. The objective of this paper is to compare the performance and efficiency of these techniques and evaluate which of these gives the best verification rate. 
According to our findings CMN gives the best results.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"78 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123268036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776161
R. Tripathi, A. S. Jalal, C. Bhatnagar
In this paper, we propose a method to detect abandoned objects in surveillance video. In the first step, foreground objects are extracted using background subtraction, with the background modelled by a running-average method. In the second step, static objects are detected using contour features of the foreground objects across consecutive frames. In the third step, the detected static objects are classified into human and non-human objects using an edge-based object recognition method capable of generating a score for fully or partially visible objects. Non-human static objects are then analyzed to detect abandoned objects. Experimental results on the IEEE Performance Evaluation of Tracking and Surveillance datasets (PETS 2006, PETS 2007) and our own dataset show that the proposed system is efficient and effective for real-time video surveillance.
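The running-average background model of the first step can be sketched as follows; the threshold, learning rate, and array sizes are illustrative choices, not the paper's:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background update: bg <- (1 - alpha)*bg + alpha*frame."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels differing from the background by more than thresh are foreground."""
    return np.abs(frame.astype(float) - bg) > thresh

# Static scene in which one bright object appears
bg = np.full((8, 8), 100.0)
frame = bg.copy()
frame[2:5, 2:5] = 200.0            # a newly placed ("abandoned") object region

mask = foreground_mask(bg, frame)  # detect before adapting
bg = update_background(bg, frame)  # then slowly absorb the scene change
```

Because alpha is small, a stationary object only bleeds into the background gradually, which is what lets static foreground objects be tracked over consecutive frames.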
{"title":"A framework for abandoned object detection from video surveillance","authors":"R. Tripathi, A. S. Jalal, C. Bhatnagar","doi":"10.1109/NCVPRIPG.2013.6776161","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776161","url":null,"abstract":"In this paper, we propose a method to detect abandoned object from surveillance video. In first step, foreground objects are extracted using background subtraction in which background modeling is done through running average method. In second step, static objects are detected by using contour features of foreground objects of consecutive frames. In third step, detected static objects are classified into human and non-human objects by using edge based object recognition method which is capable to generate the score for full or partial visible object. Nonhuman static object is analyzed to detect abandoned object. Experimental results show that proposed system is efficient and effective for real-time video surveillance, which is tested on IEEE Performance Evaluation of Tracking and Surveillance data set (PETS 2006, PETS 2007) and our own dataset.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122798771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776169
Nitish Tripathi, P J Narayanan
We present an approach to simulate both Newtonian and generalized Newtonian fluids using the Lattice Boltzmann Method. Past work has focused on accurately modelling non-Newtonian biological fluids at the micro-channel level. Our method can model the macroscopic behaviour of such fluids by simulating the variation of properties such as viscosity through the bulk of the fluid. The method works regardless of the magnitude of flow, be it through a thin tube or a large quantity of liquid splashing in a container. We simulate the change in viscosity of a generalized Newtonian fluid and its free-surface interactions with obstacles and boundaries, and we harness the inherent parallelism of the Lattice Boltzmann Method to provide a fast GPU implementation.
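The abstract does not state which constitutive model is used; a common generalized Newtonian choice is the power-law model, in which the apparent viscosity depends on the local shear rate. A minimal sketch, under that assumption:

```python
def apparent_viscosity(shear_rate, k=1.0, n=0.5):
    """Power-law generalized Newtonian model: mu = K * gamma_dot**(n - 1).

    n < 1  -> shear-thinning (viscosity falls as shear rate rises)
    n > 1  -> shear-thickening
    n == 1 -> Newtonian (constant viscosity K)
    """
    return k * shear_rate ** (n - 1.0)

# Shear-thinning example: viscosity drops as the fluid is sheared harder
low = apparent_viscosity(1.0)    # 1.0
high = apparent_viscosity(4.0)   # 4**(-0.5) = 0.5
```

In a lattice Boltzmann solver, a per-cell viscosity like this is typically folded into a local relaxation time, which is what lets viscosity vary through the bulk of the fluid.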
{"title":"Generalized newtonian fluid simulations","authors":"Nitish Tripathi, P J Narayanan","doi":"10.1109/NCVPRIPG.2013.6776169","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776169","url":null,"abstract":"We present an approach to simulate both Newtonian and generalized Newtonian fluids using Lattice Boltzmann Method. The focus has been on accurately modelling non-Newtonian fluids at the micro channel level from biological fluids in the past. Our method can model macroscopic behaviour of such fluids by simulating the variation of properties such as viscosity through the bulk of the fluid. The method works regardless of the magnitude of flow, be it through a thin tube or a large quantity of liquid splashing in a container. We simulate the change in viscosity of a generalized Newtonian fluid and its free surface interactions with obstacles and boundaries. We harness the inherent parallelism of Lattice Boltzmann Method to give a fast GPU implementation for the same.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125273576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776238
Nidhi Chahal, S. Chaudhury
High-quality depth map estimation is required for better visualization of 3D views, as depth map quality strongly affects overall 3D image quality. When depth is estimated in conventional ways from two or more images, defects appear, mostly in regions without texture. We use the Microsoft Kinect RGBD dataset to obtain input color images and depth maps, which also include noise. We propose a method to remove this noise and obtain quality depth images. First, the color and depth images are aligned using intensity-based image registration. This alignment method is mostly used in the medical field, but we apply it to correct Kinect depth maps, avoiding the cumbersome task of feature-based point correspondence between images; no preprocessing or segmentation step is required. Second, we propose an algorithm to fill the unwanted gaps in Kinect depth maps and upsample them using the corresponding high-resolution color image. Finally, we apply 9×9 median filtering to obtain high-quality, improved depth maps.
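A much-simplified version of the hole-filling step can be sketched as below. Unlike the paper's RGB-guided method, this toy version ignores the color image and fills each missing depth pixel from its valid depth neighbours only:

```python
import numpy as np

def fill_depth_holes(depth, win=1):
    """Fill missing (zero) depth pixels with the median of the valid
    neighbours in a (2*win+1)^2 window; pixels with no valid neighbours
    are left untouched."""
    filled = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(depth == 0)):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = depth[y0:y1, x0:x1]
        valid = patch[patch > 0]
        if valid.size:
            filled[y, x] = np.median(valid)
    return filled

depth = np.full((5, 5), 80.0)
depth[2, 2] = 0.0          # a Kinect-style depth "hole"
out = fill_depth_holes(depth)
```

The median (rather than the mean) of the neighbours is the usual choice here because it will not invent depth values between two surfaces at different distances.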
{"title":"High quality depth map estimation by kinect upsampling and hole filling using RGB features and mutual information","authors":"Nidhi Chahal, S. Chaudhury","doi":"10.1109/NCVPRIPG.2013.6776238","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776238","url":null,"abstract":"High quality depth map estimation is required for better visualization of 3D views as there is great impact of depth map quality on overall 3D image quality. If the depth is estimated from conventional ways using two or more images, some defects come into picture, mostly in regions without texture. We utilised Microsoft Kinect RGBD dataset to obtain input color images and depth maps which also includes some noise factors. We proposed a method to remove this noise and get quality depth images. First the color and depth images are aligned to each other using intensity based image registration. This method of image alignment is mostly used in medical field, but we applied this technique to correct kinect depth maps by which one can avoid cumbersome task of feature based point correspondence between images. There is no requirement of preprocessing or segmentation steps if we use intensity based image alignment method. Second, we proposed an algorithm to fill the unwanted gaps in kinect depth maps and upsampled it using corresponding high resolution color image. 
Finally we applied 9×9 median filtering on implementation results and get high quality and improved depth maps.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121575331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776229
Washef Ahmed, S. Mitra, Kunal Chanda, Debasis Mazumdar
People with autism have difficulty recognizing other people's emotions and are therefore unable to react to them. Although there have been attempts to develop systems that analyze facial expressions for persons with autism, very little work has explored capturing one or more expressions from mixed expressions, i.e., mixtures of two closely related expressions. This is essential for a psychotherapeutic tool used for analysis during counseling. This paper presents an approach to improving the recognition accuracy of one or more of the six prototypic expressions, namely happiness, surprise, fear, disgust, sadness, and anger, from a mixture of two facial expressions. For this purpose, a motion-gradient-based optical flow capturing muscle movement is computed between frames of a given video sequence. The computed optical flow is then used to generate feature vectors as signatures of the six basic prototypic expressions. A rule base generated by a Decision Tree is used to cluster the feature vectors obtained from the video sequence, and the clustering result is used to recognize expressions. The relative intensity of the expressions on a face in a given frame is measured. With the introduction of Component Based Analysis, which computes the feature vectors on proposed regions of interest on the face, considerable improvement is observed in recognizing one or more expressions. The results have been validated against human judgement.
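As a loose stand-in for the component-based idea of computing motion features per facial region, the sketch below measures the mean temporal gradient in each cell of a grid partition; the paper's actual motion-gradient optical flow is considerably more involved than this:

```python
import numpy as np

def region_motion_features(prev, curr, grid=2):
    """Mean absolute temporal gradient in each cell of a grid x grid
    partition of the frame -- a crude per-region motion signature."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    h, w = diff.shape
    hs, ws = h // grid, w // grid
    return [diff[r*hs:(r+1)*hs, c*ws:(c+1)*ws].mean()
            for r in range(grid) for c in range(grid)]

prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
curr[0:4, 0:4] = 10.0   # movement confined to the top-left region (e.g. a brow)

feats = region_motion_features(prev, curr)
```

Localising motion this way is what lets a classifier tell apart expressions whose muscle movements occur in different facial regions, even when two expressions are mixed.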
{"title":"Assisting the autistic with improved facial expression recognition from mixed expressions","authors":"Washef Ahmed, S. Mitra, Kunal Chanda, Debasis Mazumdar","doi":"10.1109/NCVPRIPG.2013.6776229","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776229","url":null,"abstract":"People suffering from autism have difficulty with recognizing other people's emotions and are therefore unable to react to it. Although there have been attempts aimed at developing a system for analyzing facial expressions for persons suffering from autism, very little has been explored for capturing one or more expressions from mixed expressions which are a mixture of two closely related expressions. This is essential for psychotherapeutic tool for analysis during counseling. This paper presents the idea of improving the recognition accuracy of one or more of the six prototypic expressions namely happiness, surprise, fear, disgust, sadness and anger from the mixture of two facial expressions. For this purpose a motion gradient based optical flow for muscle movement is computed between frames of a given video sequence. The computed optical flow is further used to generate feature vector as the signature of six basic prototypic expressions. Decision Tree generated rule base is used for clustering the feature vectors obtained in the video sequence and the result of clustering is used for recognition of expressions. The relative intensity of expressions for a given face present in a frame is measured. With the introduction of Component Based Analysis which is basically computing the feature vectors on the proposed regions of interest on a face, considerable improvement has been noticed regarding recognition of one or more expressions. 
The results have been validated against human judgement.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122255576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776240
Vaibhav B. Joshi, M. Raval, S. Mitra, P. Rege, S. K. Parulkar
Non-reversibility, accuracy, and revocability are essential features of any biometric template protection technique. Several template protection techniques, such as bio-hashing or biometric cryptosystems, transform raw biometric features into an alternative form known as a protected template [2]. As protected templates are non-reversible, biometric verification is done in a transform domain. A tampered or stolen protected template may cause false validation; its authentication in the database is therefore essential, and reversible watermarking provides one such effective mechanism. Watermark-protected templates are stored in the database at enrollment. During the verification phase, the incoming query template is compared with many database templates until a match is established, which increases the complexity of and burden on the biometric authentication system. In this paper, we propose tag-based template searching within a reversible watermarking technique to check authenticity and reduce this burden. In the proposal, rotation-, scale-, and translation-invariant (RST-invariant) features of the biometric image are used to tag the data. The reversibility of the watermark ensures that its presence does not affect native biometric authentication, and the presence of the watermark in the biometric template provides security against replay attacks.
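The tag-based search idea — compare the query only against templates that share its tag, rather than scanning the whole database — can be sketched with a hypothetical coarse-quantisation tag; the paper's RST-invariant features are not reproduced here:

```python
def make_tag(features, bins=4):
    """Hypothetical tag: coarse quantisation of (assumed RST-invariant)
    feature values into an integer tuple."""
    return tuple(int(f * bins) for f in features)

def enroll(db, user, features):
    """Store the user's template under its tag bucket."""
    db.setdefault(make_tag(features), []).append(user)

def candidates(db, features):
    """Only templates sharing the query's tag are compared in detail."""
    return db.get(make_tag(features), [])

db = {}
enroll(db, "alice", (0.10, 0.80))
enroll(db, "bob", (0.90, 0.20))

# A query near alice's features lands in the same tag bucket,
# so the detailed comparison is run against one template, not all of them.
found = candidates(db, (0.12, 0.81))
```

Because the tag is computed from invariant features, a rotated or rescaled query is expected to land in the same bucket as its enrolled template.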
{"title":"Reversible watermarking technique to enhance security of a biometric authentication system","authors":"Vaibhav B. Joshi, M. Raval, S. Mitra, P. Rege, S. K. Parulkar","doi":"10.1109/NCVPRIPG.2013.6776240","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776240","url":null,"abstract":"For every biometric template protecting technique, non-reversibility, accuracy, and revocability are essential features. Several template protecting techniques like bio-hash or biometric crypto system are used to transform raw biometric features into alternative form known as protected template [2]. As the protected templates are non-reversible, biometric verification is done in a transform domain. Tampered or stolen protected template may cause false validation; therefore its authentication in the database is essential. Reversible watermarking technique provides one such effective mechanism. Watermark protected templates are stored in the database at the time of its enrollment. During verification phase, incoming query template is compared with many database templates until a match is established. This verification technique increase complexity and burden on a biometric authentication system. In this paper, we propose a tag based template searching in reversible watermarking technique to check authenticity and reduce burden on biometric authentication system. In the proposal, rotation, scale and translation (RST) invariant features of biometric image are used for tagging the data. Watermark reversibility in the proposed method ensures that its presence do not affect native biometric authentication. 
Moreover presence of watermark in the biometric template provides security against replay attack.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125575546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776261
Soumyadip Sengupta, Udit Halder, R. Panda, A. S. Chowdhury
In this paper, we propose a frequency-domain, model-free gait recognition approach that works from silhouette inputs. Gait sequences are first converted into the frequency domain using the Fourier transform. The information content of the frequency components is then analysed to determine the number of effective frequencies that can help in the recognition process. These principal frequencies are treated separately to obtain scores based on the correlation coefficient between the gallery and probe images. The individual scores are fused in the last stage to obtain the final score. The proposed approach is compared with other state-of-the-art model-free gait recognition algorithms. Experimental results on the USF HumanID database clearly indicate the superiority of our technique.
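A minimal sketch of this scoring pipeline, under our own simplifying assumptions (magnitude spectra of the first few non-DC temporal frequencies, one correlation score per frequency, average fusion):

```python
import numpy as np

def frequency_features(silhouettes, n_freq=3):
    """FFT along the time axis of a (frames, H, W) silhouette stack;
    keep the magnitudes of the first n_freq non-DC components,
    giving one feature map per principal frequency."""
    spec = np.fft.fft(silhouettes, axis=0)
    return np.abs(spec[1:1 + n_freq])

def fused_score(gallery, probe, n_freq=3):
    """Correlation coefficient per principal frequency, fused by averaging."""
    g = frequency_features(gallery, n_freq)
    p = frequency_features(probe, n_freq)
    scores = [np.corrcoef(gi.ravel(), pi.ravel())[0, 1]
              for gi, pi in zip(g, p)]
    return float(np.mean(scores))

rng = np.random.default_rng(1)
seq = rng.random((16, 8, 8))        # 16 frames of 8x8 silhouettes (toy data)
score_same = fused_score(seq, seq)  # identical gallery and probe
```

Identical gallery and probe sequences yield a fused score of 1; in practice the probe is matched against every gallery subject and the highest fused score wins.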
{"title":"A frequency domain approach to silhouette based gait recognition","authors":"Soumyadip Sengupta, Udit Halder, R. Panda, A. S. Chowdhury","doi":"10.1109/NCVPRIPG.2013.6776261","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776261","url":null,"abstract":"In this paper, we propose a frequency domain based model-free gait recognition approach from silhouette inputs using Fourier Transform. Gait sequences are first converted into frequency domain using Fourier transform. Information content of the frequency components are analysed next to determine the number of effective frequencies which can help in the recognition process. These principal frequencies are treated separately to obtain scores based on the correlation coefficient between the gallery and the probe images. The individual scores are fused in the last stage to obtain the final score. The proposed approach is compared with other state-of-the-art model-free gait recognition algorithms. Experimental results on the USF HumanID database clearly indicate the supremacy of our technique.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124510584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776259
Ashish Phophalia, S. Mitra, Ajit Rajwade
A closed-form object boundary detection method based on Rough Set Theory is proposed in this paper. Most edge detection methods fail to produce closed boundaries for objects of arbitrary shape in an image. Active-contour-based methods can produce such boundaries, and the Multiphase Chan-Vese Active Contour Method is one of the most popular; however, it is constrained by the number of objects present in the image. Granular processing using the Rough Set method overcomes this constraint and provides a closed curve around the boundary of each object. This information can further be utilized to select similar patches in various image processing problems such as image denoising, image super-resolution, and image segmentation. The proposed boundary detection method has also been tested in the presence of noise. Experimental results are shown on a synthetic image as well as on an MRI of the human brain, and the performance of the proposed method is encouraging.
{"title":"Object boundary detection using Rough Set Theory","authors":"Ashish Phophalia, S. Mitra, Ajit Rajwade","doi":"10.1109/NCVPRIPG.2013.6776259","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776259","url":null,"abstract":"A Rough Set Theory based closed form object boundary detection method has been suggested in this paper. Most of the edge detection methods fail in getting closed boundary of objects of any shape present in the image. Active contour based methods are available to get such object boundaries. The Multiphase Chan-Vese Active Contour Method is one of the most popular of such techniques. However, it is constrained with number of objects present in the image. The granular processing using Rough Set method overcomes this constraint and provides a closed curve around the boundary of the objects. This information can further be utilized in selection of similar patches for various image processing problems such as Image Denoising, Image Super-resolution, Image Segmentation etc. The proposed boundary detection method has been tested in presence of noise also. The experimental results have shown on synthetic image as well as on MRI of human brain. The performance of proposed method is found to be encouraging.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133497387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776203
Dibyayan Chakraborty, P. Roy, J. Álvarez, U. Pal
Book flipping scanning refers to recording a book on video while the user flips through its pages. In recent years it has gained much attention because it significantly reduces the workload of book digitization. It is a challenging task because flipping at random speeds and in random directions makes it difficult to identify the distinct open page images (OPI) that represent each page of the book. In this paper, we propose a fast technique for removing duplicate open pages introduced into the video stream by erroneous flipping. We present an algorithm that exploits cues from the edge information of flipping pages. The nature of the cues extracted from the region of interest (ROI) of a frame determines whether a page is flipping or open, whereas the temporal position of a flipping page determines the direction of the flip. Combining this information, we decide whether an open page image is a duplicate. Experiments performed on video documents recorded with a standard-resolution camera validate the duplicate open page removal algorithm, achieving 95% accuracy.
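One hypothetical edge cue for separating flipping frames from open-page frames is overall gradient activity, sketched below; this is our simplification for illustration, not the authors' algorithm:

```python
import numpy as np

def edge_activity(frame):
    """Crude edge cue: mean absolute horizontal gradient over the frame."""
    return np.abs(np.diff(frame.astype(float), axis=1)).mean()

def is_flipping(frame, open_page, thresh=5.0):
    """A frame whose edge activity differs strongly from that of the
    reference open page is treated as a flipping (transition) frame."""
    return abs(edge_activity(frame) - edge_activity(open_page)) > thresh

open_page = np.zeros((10, 10))
open_page[:, 5:] = 100.0               # one strong page-boundary edge
flip = np.tile([0.0, 100.0], (10, 5))  # many edges while the page turns

steady = is_flipping(open_page, open_page)   # same cue -> not flipping
moving = is_flipping(flip, open_page)        # very different cue -> flipping
```

Once frames are labelled this way, consecutive open-page runs not separated by a flipping run would be candidates for duplicate removal.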
{"title":"Duplicate open page removal from video stream of book flipping","authors":"Dibyayan Chakraborty, P. Roy, J. Álvarez, U. Pal","doi":"10.1109/NCVPRIPG.2013.6776203","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776203","url":null,"abstract":"Book flipping scanning refers to the process of recording a book while the user performs the flipping action of its pages. In recent years it has gained much attention as it reduces the workload of book digitization significantly. It is a challenging task because flipping at random speed and direction causes difficulties to identify distinct open page images (OPI) which represent each page of the book. In this paper, we propose a fast technique for removing duplicate open pages introduced in the video stream due to erroneous flipping. We present an algorithm that exploits cues from edge information of flipping pages. The nature of the cues extracted from the region of interest (ROI) of the frame, determines the flipping or an open state of a page whereas temporal position a flipping page determines the direction of the flipping. Combining these information we decide whether an open page image is a duplicate or not. 
Experiments are performed on video documents recorded using a standard resolution camera to validate the duplicate open page removal algorithm and we have obtained 95% accuracy.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129960617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776211
K. Uma, C. Kesavadas, J. S. Paul
Scan time reduction in MRI can be achieved by partial k-space reconstruction, but truncation of k-space generates artifacts in the reconstructed image. We develop a subspace projection algorithm for artifact-free reconstruction of sparse MRI. The algorithm is applied to a frequency-weighted k-space, which fits a signal-space model for sparse MR images. The application is illustrated using Magnetic Resonance Angiography (MRA).
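The artifact problem this paper addresses can be demonstrated in one dimension: truncating k-space and reconstructing by a zero-filled inverse FFT introduces Gibbs ringing around sharp edges. This illustrates only the problem, not the proposed subspace projection algorithm:

```python
import numpy as np

# Full k-space of a simple 1-D "image": a rectangular bright region
signal = np.zeros(64)
signal[28:36] = 1.0
kspace = np.fft.fft(signal)

# Partial acquisition: keep only the low frequencies, zero-fill the rest
partial = np.zeros_like(kspace)
keep = 18                       # 18 positive + 18 negative coefficients of 64
partial[:keep] = kspace[:keep]
partial[-keep:] = kspace[-keep:]

recon = np.fft.ifft(partial).real

# Truncation produces ringing (Gibbs) artifacts around the edges
err = np.abs(recon - signal).max()
```

The full k-space inverts back to the original signal exactly, while the truncated version shows a clear maximum error near the edges of the bright region; removing such artifacts without reacquiring the missing samples is what reconstruction algorithms like this paper's must do.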
{"title":"Partial Fourier reconstruction using subspace projection","authors":"K. Uma, C. Kesavadas, J. S. Paul","doi":"10.1109/NCVPRIPG.2013.6776211","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776211","url":null,"abstract":"Scan time reduction in MRI can be achieved by partial k-space reconstruction. Truncation of the k-space results in generation of artifacts in the reconstructed image. A subspace projection algorithm is developed for artifact-free reconstruction of sparse MRI. The algorithm is applied to a frequency weighted k-space, which fits into a signal-space model for sparse MR images. The application is illustrated using Magnetic Resonance Angiogram (MRA).","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130299051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}