Stochastic resonance aided robust techniques for segmentation of medical ultrasound images
J. V. Sagar, C. Bhagvati
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776209
Published in: 2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)
The existence of stochastic resonance has been demonstrated in physical, biological and geological systems, where it boosts weak signals to make them detectable. Narrow regions, small features and low-contrast or subtle edges in noisy images correspond to such weak signals. In this paper, the occurrence and exploitation of stochastic resonance in the detection, extraction and analysis of such features is demonstrated both mathematically and empirically. The mathematical results are confirmed by simulation studies. Finally, results on medical ultrasound images demonstrate that several subtle features lost by the application of robust techniques such as the mean-shift filter are recovered by stochastic resonance. These results reconfirm the mathematical and simulation findings.
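The threshold mechanism behind stochastic resonance can be illustrated with a short, self-contained sketch (an illustration of the general phenomenon, not the paper's algorithm): a weak signal lying entirely below a hard detection threshold produces no response on its own, but adding a moderate amount of noise and averaging over many realizations reveals its structure.

```python
import random

def detect(signal, threshold, noise_std, trials=2000, seed=0):
    """Fraction of noisy trials in which each sample crosses a hard
    threshold.  A sub-threshold signal becomes visible in this average
    only when noise_std is neither too small nor too large."""
    rng = random.Random(seed)
    counts = [0] * len(signal)
    for _ in range(trials):
        for i, s in enumerate(signal):
            if s + rng.gauss(0.0, noise_std) >= threshold:
                counts[i] += 1
    return [c / trials for c in counts]

# A weak two-level "edge": both levels lie below the threshold of 1.0,
# so a noiseless detector outputs zero everywhere.
weak_edge = [0.2] * 8 + [0.8] * 8

quiet = detect(weak_edge, threshold=1.0, noise_std=0.01)
tuned = detect(weak_edge, threshold=1.0, noise_std=0.4)

# Edge contrast = mean crossing rate on the high side minus the low side.
contrast_quiet = sum(quiet[8:]) / 8 - sum(quiet[:8]) / 8
contrast_tuned = sum(tuned[8:]) / 8 - sum(tuned[:8]) / 8
```

With too little noise the edge stays invisible, and with too much it would be washed out; the intermediate optimum is the resonance.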
Fast area of contact computation for collision detection of a deformable object using FEM
P. Shrivastava, Sukhendu Das
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776197
In the detection and analysis of deformation in collision scenarios, methods based on an area of contact, rather than a single point of contact, generate numerically stable impulse forces. An area of contact improves the stability of control algorithms, but it is often associated with a high computational cost. In this paper, we alleviate this problem by proposing a novel algorithm for collision detection of a deformable mesh against rigid structures. We reuse the data structures maintained for elastic force computations in the FEM for the purpose of collision detection. Parallel constructs on the GPU, together with a reduced model, make the simulations interactive even for meshes with thousands of elements. Since we do not maintain any additional complex structure to keep track of the deformable body at each iteration, we significantly reduce the usage of GPU memory bandwidth. The efficiency of our method is illustrated by reporting high culling efficiency on various tests.
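As a toy illustration of the area-of-contact idea (the mesh representation and the rigid horizontal plane below are assumptions for the sketch, not the paper's FEM data structures or GPU algorithm), a contact area can be accumulated over the surface triangles of the deformed mesh that lie against the rigid obstacle:

```python
def contact_area_with_plane(vertices, triangles, plane_z=0.0):
    """Area of contact of a deformable surface mesh against a rigid
    plane z = plane_z: the summed area of surface triangles whose
    vertices all lie at or below the plane."""
    def area(a, b, c):
        # Half the magnitude of the cross product of two edge vectors.
        ux, uy, uz = (b[i] - a[i] for i in range(3))
        vx, vy, vz = (c[i] - a[i] for i in range(3))
        cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

    total = 0.0
    for i, j, k in triangles:
        tri = (vertices[i], vertices[j], vertices[k])
        if all(v[2] <= plane_z for v in tri):
            total += area(*tri)
    return total
```

An impulse distributed over this area, rather than applied at a single penetrating vertex, is what yields the numerical stability the abstract refers to.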
OD-Match: PatchMatch based Optic Disk detection
S. Ramakanth, R. Venkatesh Babu
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776206
The Approximate Nearest-Neighbour Field (ANNF) has been an area of interest in recent research across a wide variety of topics in the graphics and multimedia communities. Medical image processing, however, has remained relatively untouched by these developments in ANNF computation, brought about by extremely efficient algorithms such as PatchMatch. In this paper, we use Generalized PatchMatch for Optic Disk detection in retinal images, and show that efficient ANNF computation yields results with 98% accuracy at an average time of 0.5 s. This is significantly faster than conventional Optic Disk detection methods, which average 95-97% accuracy with 3-5 s of computation.
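PatchMatch itself is beyond a short sketch, but the field it approximates is easy to state. The brute-force reference below (images as plain nested lists, for illustration only) computes the exact nearest-neighbour field that PatchMatch approximates orders of magnitude faster via random initialisation, offset propagation and random search:

```python
def nearest_neighbour_field(src, ref, p):
    """For every p x p patch of greyscale image src, find the offset of
    the closest p x p patch in ref under sum-of-squared-differences.
    This exact O(n^2) search is what PatchMatch approximates."""
    h, w = len(src), len(src[0])
    H, W = len(ref), len(ref[0])

    def ssd(y, x, Y, X):
        return sum((src[y + dy][x + dx] - ref[Y + dy][X + dx]) ** 2
                   for dy in range(p) for dx in range(p))

    nnf = {}
    for y in range(h - p + 1):
        for x in range(w - p + 1):
            nnf[(y, x)] = min(((Y, X)
                               for Y in range(H - p + 1)
                               for X in range(W - p + 1)),
                              key=lambda t: ssd(y, x, *t))
    return nnf
```

For optic-disk detection, such a field maps patches of a test retinal image to patches of reference images with known disk locations; the matched offsets then vote for the disk position.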
Symmetry based 3D reconstruction of repeated cylinders
Adersh Miglani, Sumantra Dutta Roy, S. Chaudhury, J. B. Srivastava
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776266
First, we describe how 360°-rotational symmetry may be used for the three-dimensional reconstruction of repeated cylinders from a single perspective image. In our experiments, we consider translational and affine repetition of cylinders with vertical and random orientations. We then create a virtual camera configuration for retrieving the pose and location of the repeated cylinders. The combination of 360°-rotational symmetry and the camera center is used to identify two orthogonal planes, called the axis plane and the orthogonal axis plane. These two planes are the basis for the proposed reconstruction framework and virtual camera configuration. Furthermore, we discuss possible extensions of our method to vision tasks based on motion analysis.
A novel framework for multi-focus image fusion
G. Bhatnagar, Q.M. Jonathan Wu
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776177
One of the foremost requisites for human perception and computer vision tasks is an image with all objects in focus. Image fusion offers a solution: it produces a single clear image from several images of a scene acquired at different focus levels. In this paper, a novel framework for multi-focus image fusion is proposed that is computationally simple because it operates entirely in the spatial domain. The framework incorporates the fractal dimensions of the source images into the fusion process. Extensive experiments on different multi-focus image sets demonstrate that it is consistently superior to conventional image fusion methods in terms of visual and quantitative evaluations.
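The abstract does not spell out the fusion rule, but the quantity the framework is built on, fractal dimension, can be sketched with a standard box-counting estimate (the pick-the-higher-dimension selection rule below is an illustrative assumption, on the premise that in-focus regions show richer detail):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting dimension of a set of 2-D integer
    points: count occupied boxes at several grid sizes and fit the
    slope of log(count) vs log(1/size) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

def select_sharper(points_a, points_b):
    """Toy fusion rule: keep the region whose detail set has the higher
    estimated fractal dimension (assumed to be the in-focus one)."""
    da = box_counting_dimension(points_a)
    db = box_counting_dimension(points_b)
    return 'a' if da >= db else 'b'
```

Here `points` would be, for example, the edge pixels extracted from corresponding regions of the two source images; a blurred region yields sparser, lower-dimensional detail.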
Perceptual video hashing based on the Achlioptas's random projections
R. Sandeep, P. Bora
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776252
A perceptual video hashing function maps the perceptual content of a video to a fixed-length binary string called the perceptual hash. Perceptual hashing is a promising solution to the content-identification and content-authentication problems. Projections of image and video data onto a subspace have been exploited in the literature to obtain compact hash functions. We propose a new perceptual video hashing algorithm based on Achlioptas's random projections. Simulation results show that the proposed perceptual hash function is robust to common signal and image processing attacks.
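Achlioptas's construction is concrete: a database-friendly random projection whose entries are +sqrt(3), 0 and -sqrt(3) with probabilities 1/6, 2/3 and 1/6, so most multiplications vanish. A minimal projection-then-binarise hash follows; the median binarisation step is a common choice assumed here, not necessarily the paper's quantiser:

```python
import random

def achlioptas_matrix(k, d, seed=0):
    """k x d sparse random projection matrix of Achlioptas: entries
    +sqrt(3) with prob 1/6, 0 with prob 2/3, -sqrt(3) with prob 1/6."""
    rng = random.Random(seed)
    r3 = 3 ** 0.5
    # Six equally likely outcomes encode the 1/6, 2/3, 1/6 split.
    return [[rng.choice((r3, 0.0, 0.0, 0.0, 0.0, -r3))
             for _ in range(d)] for _ in range(k)]

def perceptual_hash(feature, k=64, seed=0):
    """Project a per-frame feature vector with an Achlioptas matrix and
    binarise each projection against the median to get a k-bit hash."""
    R = achlioptas_matrix(k, len(feature), seed)
    proj = [sum(r * f for r, f in zip(row, feature)) for row in R]
    med = sorted(proj)[len(proj) // 2]
    return [1 if p > med else 0 for p in proj]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))
```

Because the binarisation compares projections against their own median, the hash is invariant to global positive scaling of the feature vector, one simple form of the robustness perceptual hashes aim for.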
Source color error analysis for robust separation of reflection components
S. Biswas, K. Shafique
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776253
In this paper, we address the problem of separating the diffuse and specular reflection components of complex textured surfaces from a single color image. Unlike most previous approaches, which assume accurate knowledge of the illumination source color, we analyze errors in the source color information to perform robust separation. The analysis leads to a simple, efficient and robust algorithm that estimates the diffuse and specular components using the estimated source color. The algorithm is completely automatic and does not need explicit color segmentation or color boundary detection, as required by many existing methods. Results on complex textured images show the effectiveness of the proposed algorithm for robust reflection component separation.
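For context, the dichromatic-model simplification that such methods build on can be sketched in a few lines. This is the classical min-channel heuristic under an assumed neutral (white) source colour, not the paper's algorithm, which is specifically designed to tolerate errors in the estimated source colour:

```python
def separate(pixel):
    """Split an RGB pixel into diffuse and specular parts under the
    dichromatic model with a neutral source: the specular term is
    source-coloured (equal in all channels), so its contribution is
    bounded by the smallest channel, which this heuristic takes as
    the specular estimate."""
    m_s = min(pixel)
    diffuse = tuple(c - m_s for c in pixel)
    specular = (m_s, m_s, m_s)
    return diffuse, specular
```

The heuristic fails exactly when the source colour deviates from neutral or is misestimated, which is the error regime the paper's analysis addresses.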
Real time anomaly detection in H.264 compressed videos
Sovan Biswas, R. Venkatesh Babu
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776164
Real-time anomaly detection is a pressing need for security applications. In this paper, we propose a real-time anomaly detection algorithm that uses cues from motion vectors in the H.264/AVC compressed domain. The work is principally motivated by the observation that motion vectors (MVs) exhibit different characteristics during anomalous events. We have observed that H.264 motion vector magnitudes contain relevant information that can be used to model usual behavior (UB) effectively. This model is then extended to detect abnormality based on the probability of occurrence of a behavior. Additionally, we suggest a hierarchical approach through a motion pyramid for high-resolution videos to further increase the detection rate. The proposed algorithm performs extremely well on the UMN and Peds anomaly detection video datasets, with detection speeds of more than 150 and 65-75 frames per second on the respective datasets, a more than 200× speedup with accuracy comparable to pixel-domain state-of-the-art algorithms.
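The usual-behaviour idea can be sketched minimally: learn the distribution of motion-vector magnitudes from normal footage, then flag frames whose magnitudes are improbable under that distribution. The histogram model, smoothing and threshold below are illustrative assumptions, not the paper's exact model:

```python
import math

class UsualBehaviour:
    """Histogram model of motion-vector magnitudes: training frames
    build a magnitude histogram; a test frame is flagged anomalous
    when the mean log-probability of its MV magnitudes falls below
    a threshold."""

    def __init__(self, bin_width=1.0):
        self.bin_width = bin_width
        self.counts = {}
        self.total = 0

    def train(self, magnitudes):
        for m in magnitudes:
            b = int(m / self.bin_width)
            self.counts[b] = self.counts.get(b, 0) + 1
            self.total += 1

    def log_prob(self, m):
        b = int(m / self.bin_width)
        # Laplace smoothing so unseen magnitudes get small nonzero mass.
        return math.log((self.counts.get(b, 0) + 1)
                        / (self.total + len(self.counts) + 1))

    def is_anomalous(self, magnitudes, threshold=-5.0):
        score = sum(self.log_prob(m) for m in magnitudes) / len(magnitudes)
        return score < threshold
```

Because the magnitudes come for free from the compressed bitstream, no pixel decoding or optical-flow computation is needed, which is where the reported speedup originates.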
Script independent detection of bold words in multi font-size documents
P. Saikrishna, A. Ramakrishnan
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776180
A script-independent, font-size-independent scheme is proposed for detecting bold words in printed pages. In OCR applications such as minor modification of an existing printed form, it is desirable to reproduce the font size and characteristics such as bold and italics in the recognized document. In this morphological opening based detection of bold (MOBDoB) method, the binarized image is segmented into sub-images of uniform font size using word-height information. A rough estimate of the character stroke width in each sub-image is obtained from the pixel density. Each sub-image is then opened with a square structuring element whose size is determined by the respective stroke width. The union of all the opened sub-images is used to determine the locations of the bold words; extracting all such words from the binarized image gives the final result. A minimum of 98% of bold words were detected from a total of 65 Tamil, Kannada and English pages, with a false alarm rate below 0.4%.
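The core morphological step is easy to sketch: opening with a square structuring element slightly larger than the regular stroke width erases normal strokes and keeps bold ones. The pure-Python opening below (binary images as nested lists, 1 = ink) illustrates the operation, not the paper's implementation:

```python
def erode(img, k):
    """Binary erosion with a k x k square structuring element,
    anchored at the top-left corner of the window."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - k + 1):
        for x in range(w - k + 1):
            if all(img[y + dy][x + dx] for dy in range(k) for dx in range(k)):
                out[y][x] = 1
    return out

def dilate(img, k):
    """Binary dilation with a k x k square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy in range(k):
                    for dx in range(k):
                        if y + dy < h and x + dx < w:
                            out[y + dy][x + dx] = 1
    return out

def opening(img, k):
    """Opening = erosion then dilation: strokes thinner than k pixels
    vanish, so with k just above the regular stroke width only bold
    strokes survive -- the core of the MOBDoB idea."""
    return dilate(erode(img, k), k)
```

On a page image, connected components surviving the opening mark the bold words; the stroke-width estimate per sub-image chooses k.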
Indian Movie Face Database: A benchmark for face recognition under wide variations
S. Setty, M. Husain, Parisa Beham, Jyothi Gudavalli, Menaka Kandasamy, R. Vaddi, V. Hemadri, J C Karure, Raja Raju, B. Rajan, Vijay Kumar, C V Jawahar
Pub Date: 2013-12-01. DOI: 10.1109/NCVPRIPG.2013.6776225
Recognizing human faces in the wild is emerging as a critically important and technically challenging computer vision problem. With a few notable exceptions, most previous work in the last several decades has focused on recognizing faces captured in a laboratory setting. However, with the introduction of databases such as LFW and Pubfigs, the face recognition community is gradually shifting its focus to much more challenging unconstrained settings. Since its introduction, the LFW verification benchmark has received a great deal of attention, with various researchers contributing state-of-the-art results. To further boost unconstrained face recognition research, we introduce the more challenging Indian Movie Face Database (IMFDB), which has much more variability than LFW and Pubfigs. The database consists of 34512 faces of 100 known actors collected from approximately 103 Indian movies. Unlike LFW and Pubfigs, which used face detectors to automatically detect faces from web collections, the faces in IMFDB were detected manually from all the movies. Manual selection of faces from movies resulted in a high degree of variability (in scale, pose, expression, illumination, age, occlusion and makeup) of the kind seen in the natural world. IMFDB is the first face database that provides detailed annotation of age, pose, gender, expression and amount of occlusion for each face, which may help other face-related applications.