An optimized derivative projection warping approach for moving platform video stabilization
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776218
Deepika Shukla, R. K. Jha
This paper presents an optimized and efficient video stabilization technique based on projection curve warping. In most recorded videos, the relative displacement between two consecutive frames ranges from 3-4 pixels for hand-held applications to 25-30 pixels for moving-platform applications. Based on this experimental observation, the use of a Sakoe-Chiba band with a fixed window size is proposed to constrain distance-matrix estimation in the dynamic time warping algorithm. Existing projection-based stabilization techniques match intensity values for motion estimation, so any change in local intensity values, whether induced by illumination variation, moving objects or scene variation, causes error in the estimated motion. To overcome this problem, a higher-level feature, the shape of the projection curve, is incorporated by matching the local derivative of the curve instead of the intensity values themselves. The robustness and time efficiency of the proposed technique are measured in terms of interframe transformation fidelity and processing time, respectively.
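A minimal sketch of the two core ideas, banded DTW and matching projection-curve derivatives, is shown below; the window size, the squared-difference cost and the column-projection choice are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def dtw_sakoe_chiba(x, y, window=30):
    """DTW distance between two 1-D sequences, with the distance matrix
    restricted to a Sakoe-Chiba band of half-width `window`."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        # Only cells within the band around the diagonal are evaluated.
        for j in range(max(1, i - window), min(m, i + window) + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def projection_derivative(frame):
    """Shape feature: local derivative of the column-projection curve,
    matched instead of the raw projected intensities."""
    proj = frame.sum(axis=0).astype(np.float64)
    return np.diff(proj)

# Usage on two consecutive grayscale frames f0, f1 (2-D numpy arrays):
# d = dtw_sakoe_chiba(projection_derivative(f0), projection_derivative(f1))
```

The band keeps the evaluated portion of the distance matrix O(n·w) instead of O(n·m), which is where the reported speed-up over unconstrained warping would come from.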
{"title":"An optimized derivative projection warping approach for moving platform video stabilization","authors":"Deepika Shukla, R. K. Jha","doi":"10.1109/NCVPRIPG.2013.6776218","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776218","url":null,"abstract":"This paper presents an optimized and efficient video stabilization technique based on projection curve warping. In most of the recorded videos, the relative displacement between two consecutive frames goes from 3-4 pixel for hand-held and 25-30 for moving platform applications. Based on this experimental data, the use of Sakoe-Chiba band with fixed window size has been proposed for constraining distance matrix estimation, in the dynamic time warping algorithm. In the existing projection based stabilization techniques, intensity values are matched for motion estimation. Any change in the local intensity values either induced due to intensity variation, moving objects or scene variation, causes error in the estimated motion. To overcome this problem, a higher level feature i.e. shape of the projection curve has been incorporated by matching the local derivative of curve instead of the intensity values itself. Robustness and time efficiency of the proposed technique is measured in terms of interframe transformation fidelity and processing time respectively.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125182837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking based depth-guided video inpainting
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776217
Saroj Hatheele, M. Zaveri
In this paper, we propose a novel technique for tracking-based video inpainting using depth information. Depth information obtained from structure from motion is refined by our extended voting-based algorithm. The refined depth map is used to extract the moving foreground object from the tracked moving object, which is then transferred into other video frames using video inpainting based on integrated color and depth information. We compare color-only video inpainting with inpainting based on integrated color and depth information. Our proposed method enables special effects by incorporating tracking and depth information into video inpainting, and the inclusion of depth information increases the quality of the inpainted video. Finally, we present experimental results of depth refinement and video inpainting for monocular video sequences captured with a static camera and moving objects.
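The sketch below shows only the outline of such a pipeline, not the paper's algorithm: a refined depth map is thresholded inside a tracked bounding box to isolate the foreground, and OpenCV's Telea inpainting stands in for the integrated color-and-depth inpainting step. All names and the depth threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_foreground_mask(depth, bbox, depth_thresh):
    """Binary mask of pixels closer than depth_thresh inside the tracked box."""
    x, y, w, h = bbox
    mask = np.zeros(depth.shape, dtype=np.uint8)
    roi = depth[y:y + h, x:x + w]
    mask[y:y + h, x:x + w] = (roi < depth_thresh).astype(np.uint8) * 255
    return mask

def remove_and_inpaint(frame, mask):
    """Fill the masked object region from surrounding color content."""
    return cv2.inpaint(frame, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

# mask = extract_foreground_mask(depth_map, tracked_bbox, depth_thresh=1.5)
# result = remove_and_inpaint(frame, mask)
```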
{"title":"Tracking based depth-guided video inpainting","authors":"Saroj Hatheele, M. Zaveri","doi":"10.1109/NCVPRIPG.2013.6776217","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776217","url":null,"abstract":"In this paper, we propose a novel technique of tracking based video inpainting using depth information. Depth information obtained from the structure of motion is refined by extended proposed voting based algorithm. The refined depth map is used to extract moving foreground object from tracked moving object then replaces it into other video frame using integrated color and depth information based video inpainting. We compared the color based video inpainting with integrated color and depth information based video inpainting. Our proposed method acquaints special effect by including tracking and depth information to video inpainting. Inclusion of depth information increases the quality of inpainted video. Finally, we present experimental results of depth refinement and video inpainting for molecular video sequences captured with static camera with moving objects.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122794635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach for terrain analysis of lunar surface by Chandrayaan-1 data using open source libraries
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776166
Hardik Acharya, Amitabh, T. Srinivasan, B. Gopalakrishna
Chandrayaan-1, India's first moon mission, was launched by ISRO in October 2008. SAC (Space Applications Centre) is responsible for developing software for processing data from HySI (Hyper Spectral Imager) and TMC (Terrain Mapping Camera). The present work discusses the technique and methodology for generating terrain parameters, i.e. slope, aspect, relief shade, contour, etc., using the Digital Elevation Model (DEM) generated from Chandrayaan-1 TMC datasets. An algorithm and corresponding desktop application software have been developed and implemented, and preliminary testing of the application using Chandrayaan-1 DEM data indicates promising results. Creating the execution environment using open-source technology is a challenging task, as it involves building the open-source libraries with Visual Studio. This paper describes the generation of slope, aspect, relief-shade, painted-slope, painted-aspect and painted-DEM products and discusses the results achieved for effective terrain evaluation.
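Slope, aspect and relief shade follow directly from DEM gradients via standard formulas; a hedged sketch of that step is below. The cell size, sun position and aspect convention are assumptions, and the paper's actual open-source-library implementation is not reproduced here.

```python
import numpy as np

def terrain_parameters(dem, cell_size=1.0, sun_az_deg=315.0, sun_alt_deg=45.0):
    """Slope, aspect and relief shade from a gridded DEM (elevations in the
    same units as cell_size)."""
    # Finite-difference elevation gradients: axis 0 = rows (y), axis 1 = cols (x).
    dz_dy, dz_dx = np.gradient(dem.astype(np.float64), cell_size)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))        # radians from horizontal
    aspect = np.arctan2(-dz_dx, dz_dy)               # one common from-north convention
    # Relief shade: cosine of the angle between surface normal and sun direction.
    az, alt = np.radians(sun_az_deg), np.radians(sun_alt_deg)
    shade = (np.sin(alt) * np.cos(slope) +
             np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.degrees(slope), np.degrees(aspect) % 360.0, np.clip(shade, 0.0, 1.0)
```

"Painted" slope/aspect/DEM products would then just be these arrays mapped through a color lookup table.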
{"title":"A new approach for terrain analysis of lunar surface by Chandrayaan-1 data using open source libraries","authors":"Hardik Acharya, Amitabh, T. Srinivasan, B. Gopalakrishna","doi":"10.1109/NCVPRIPG.2013.6776166","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776166","url":null,"abstract":"Chandrayaan-1, India's first moon mission was launched by ISRO in October 2008. SAC (Space Applications centre) is responsible for development of software for processing data from HySI (Hyper Spectral Imager) and TMC (Terrain Mapping Camera). The present work discusses the technique and methodology for generating terrain parameters i.e. slope, aspect, relief-shade, contour etc. using Digital Elevation Model (DEM) generated from Chandrayaan-1 TMC datasets. In this paper, an algorithm and corresponding desktop application software has been developed and implemented. Preliminary testing of application using Chandrayaan-1 DEM data indicate promising results. Environment creation for execution of the code using open source technology is the challenging task, as it includes the building of open source libraries with visual studio. This paper describes the Slope, Aspect, Relief-Shade, Painted slope, Painted aspect and Painted DEM generation method and discusses the results achieved for the good evaluation of terrain.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133481683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Near real-time face parsing
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776192
A. Minocha, Digvijay Singh, Nataraj Jammalamadaka, C. V. Jawahar
Commercial applications such as driver-assistance systems in cars and smile-detection software in cameras typically require reliable facial landmark points, such as the locations of the eyes and lips, together with the face pose, at near real-time rates. Current methods are often unreliable, cumbersome or computationally intensive. In this work, we focus on implementing a reliable, near real-time method which parses an image, detects faces, estimates their pose and locates landmark points on the face. Our method builds on the existing literature and works for both images and videos.
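This is not the authors' method, only a sketch of the detect-then-localize structure such a pipeline takes, with off-the-shelf OpenCV Haar cascades standing in for the face detector and landmark localizer.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def parse_frame(gray):
    """Return a list of (face_box, eye_boxes) found in a grayscale frame."""
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        # Landmarks are searched only inside each detected face region.
        eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
        results.append(((x, y, w, h),
                        [(x + ex, y + ey, ew, eh) for (ex, ey, ew, eh) in eyes]))
    return results
```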
{"title":"Near real-time face parsing","authors":"A. Minocha, Digvijay Singh, Nataraj Jammalamadaka, C. V. Jawahar","doi":"10.1109/NCVPRIPG.2013.6776192","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776192","url":null,"abstract":"Commercial applications like driver assistance programs in cars, smile detection softwares in cameras typically require reliable facial landmark points like the location of eyes, lips etc. and face pose at near real-time. Current methods are often unreliable, very cumbersome or computationally intensive. In this work, we focus on implementing a reliable and real-time method which parses an image and detects faces, estimates their pose and locates landmark points on the face. Our method builds on the existing literature. The method can work both for images and videos.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133920665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-frequency analysis based motion detection in perfusion weighted MRI
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776215
M. Sushma, Anubha Gupta, J. Sivaswamy
In this paper, we present a novel automated method to detect motion in perfusion weighted imaging (PWI), a type of magnetic resonance imaging (MRI). In PWI, blood perfusion is measured by injecting an exogenous tracer, called a bolus, into the patient's blood flow and then tracking it through the brain. PWI requires a long data acquisition time to form a time series of volumes; hence, unavoidable patient movements during the scan corrupt the data with motion. These motion artifacts must be detected in the captured data for correct disease diagnosis. In PWI, the intensity profile is disturbed both by motion and by the passage of the bolus through the blood vessels, so the two effects cannot be distinguished from intensity alone. We therefore propose an efficient motion detection method based on time-frequency analysis, and show that it is computationally inexpensive and fast. The method is evaluated on a DSC-MRI sequence with simulated motion of different degrees and detects motion in a few seconds.
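The abstract does not give the exact discriminator, so the following is only a generic sketch of the time-frequency idea: abrupt motion shows up as transient high-frequency energy in a per-volume mean-intensity signal, while the bolus passage appears as a slow low-frequency trend. Sampling rate, window length and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def detect_motion_times(mean_signal, fs=0.5, nperseg=16, k=3.0):
    """mean_signal: 1-D mean brain intensity per volume (one sample per TR);
    fs: volume sampling rate in Hz. Returns STFT frame times whose
    high-frequency energy exceeds mean + k*std."""
    f, t, Z = stft(mean_signal, fs=fs, nperseg=nperseg)
    hf = np.abs(Z[f > f.max() / 2, :]).sum(axis=0)   # upper-half-band energy
    thresh = hf.mean() + k * hf.std()
    return t[hf > thresh]                            # suspected motion instants
```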
{"title":"Time-frequency analysis based motion detection in perfusion weighted MRI","authors":"M. Sushma, Anubha Gupta, J. Sivaswamy","doi":"10.1109/NCVPRIPG.2013.6776215","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776215","url":null,"abstract":"In this paper, we present a novel automated method to detect motion in perfusion weighted images (PWI), which is a type of magnetic resonance imaging (MRI). In PWI, blood perfusion is measured by injecting an exogenous tracer called bolus into the blood flow of a patient and then tracking it in the brain. PWI requires a long data acquisition time to form a time series of volumes. Hence, motion occurs due to patient's unavoidable movements during a scan, which in turn results into motion corrupted data. There is a necessity of detection of these motion artifacts on captured data for correct disease diagnosis. In PWI, intensity profile gets disturbed due to occurrence of motion and/or bolus passage through the blood vessels. There is no way to distinguish between motion occurrence and bolus passage. In this paper, we propose an efficient time-frequency analysis based motion detection method. We show that proposed method is computationally inexpensive and fast. This method is evaluated on a DSC-MRI sequence with simulated motion of different degrees. We show that our approach detects motion in a few seconds.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133928361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correlation based object-specific attentional mechanism for target localization in high resolution satellite images
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776221
Phool Preet, P. Chowdhury, G. S. Malik
An attentional mechanism, or focus of attention, is the front end of an object recognition system, tasked with rapidly reducing the search area in the image. In this paper we present correlation-based template matching as an attentional mechanism for high-resolution satellite images. We show experimentally that, despite intra-class variations and object transformations, correlation-based template matching can be deployed as an attentional mechanism. Different image variants, such as gradient magnitude and gradient orientation, are also compared for correlation matching. Based on these experiments, a threshold-selection mechanism is given.
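A minimal sketch of the attention step: normalized cross-correlation of a target template over the scene, with a threshold retaining only candidate regions for the downstream recognizer. The 0.6 threshold is illustrative; the paper derives its own selection rule.

```python
import cv2
import numpy as np

def candidate_regions(scene, template, thresh=0.6):
    """Correlate a template over a scene and keep locations whose
    normalized correlation score exceeds the threshold."""
    resp = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(resp >= thresh)
    h, w = template.shape
    return [(x, y, w, h, float(resp[y, x])) for y, x in zip(ys, xs)]

# Matching on gradient magnitude instead of raw intensity (one of the
# compared variants) only changes the inputs, e.g.:
# gx = cv2.Sobel(scene, cv2.CV_32F, 1, 0)
# gy = cv2.Sobel(scene, cv2.CV_32F, 0, 1)
# scene_mag = cv2.magnitude(gx, gy)   # template transformed the same way
```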
{"title":"Correlation based object-specific attentional mechanism for target localization in high resolution satellite images","authors":"Phool Preet, P. Chowdhury, G. S. Malik","doi":"10.1109/NCVPRIPG.2013.6776221","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776221","url":null,"abstract":"Attentional Mechanism or Focus of Attention is the front end of object recognition systems with the task of rapidly reducing the search area in the image. In this paper we present correlation based template matching as an attentional mechanism for high resolution satellite images. We experimentally show that despite intra-class variations and object transformations, correlation based template matching can be deployed as attentional mechanism. Different image variants like gradient magnitude and gradient orientation are also compared for correlation matching. Based on the experiments a threshold selection mechanism is given.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131928607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single image super-resolution using compressive sensing with learned overcomplete dictionary
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776176
B. Deka, Kanchan Kumar Gorain, Navadeep Kalita, B. Das
This paper proposes a novel framework that unifies the sparsity of a signal over a properly chosen basis set with the theory of signal reconstruction via compressed sensing, in order to obtain a high-resolution image from a single down-sampled version of the same image. First, we compute sparse overcomplete representations of the low-resolution patches of the input image. Then, using the sparse coefficients obtained above, we reconstruct a high-resolution output image. A blurring matrix is introduced to enhance the incoherence between the sparsifying dictionary and the sensing matrices, which also results in better preservation of image edges and other textures. Compared with similar techniques, the proposed method yields much better results both visually and quantitatively.
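A hedged sketch of the per-patch reconstruction step, assuming a pair of coupled dictionaries D_l (low-resolution patch atoms) and D_h (high-resolution atoms) learned offline; the paper's blurring-matrix and sensing-matrix details are omitted, and OMP stands in for whichever sparse solver the authors used.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sr_patch(lr_vec, D_l, D_h, hr_shape, n_nonzero=5):
    """Sparse-code a vectorized low-res patch over D_l, then synthesize the
    corresponding high-res patch from D_h with the same coefficients.
    D_l and D_h must share the same atoms column-for-column."""
    alpha = orthogonal_mp(D_l, lr_vec, n_nonzero_coefs=n_nonzero)
    return (D_h @ alpha).reshape(hr_shape)

# The output image is assembled by running sr_patch over overlapping
# low-res patches and averaging the overlapping high-res estimates.
```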
{"title":"Single image super-resolution using compressive sensing with learned overcomplete dictionary","authors":"B. Deka, Kanchan Kumar Gorain, Navadeep Kalita, B. Das","doi":"10.1109/NCVPRIPG.2013.6776176","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776176","url":null,"abstract":"This paper proposes a novel framework that unifies the concept of sparsity of a signal over a properly chosen basis set and the theory of signal reconstruction via compressed sensing in order to obtain a high-resolution image derived by using a single down-sampled version of the same image. First, we enforce sparse overcomplete representations on the low-resolution patches of the input image. Then, using the sparse coefficients as obtained above, we reconstruct a high-resolution output image. A blurring matrix is introduced in order to enhance the incoherency between the sparsifying dictionary and the sensing matrices which also resulted in better preservation of image edges and other textures. When compared with the similar techniques, the proposed method yields much better result both visually and quantitatively.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122522540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time approximate and exact CSG of implicit surfaces on the GPU
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776199
Jag Mohan Singh
We present a simple and powerful scheme for CSG of implicit surfaces on the GPU. We decompose the boolean expression of the surfaces into sum-of-products form. Our algorithm then renders each product term; the sum of products is obtained automatically by enabling the depth test. Our approximate CSG uses the adaptive marching points algorithm to find ray-surface intersections: once root isolation yields an interval containing a root, that interval establishes the presence of an intersection, and we perform root refinement only for the uncomplemented terms in the product. Exact CSG instead uses the discriminant of the ray-surface intersection to establish the presence of a root. We then evaluate the product expression by checking that all uncomplemented terms are true and all complemented terms are false. If this condition is met, the maximum of the roots among the uncomplemented terms is the solution. Our algorithm is linear in the number of terms, O(n). We achieve real-time rates for products of 4-5 terms with approximate CSG, and better than real-time rates with exact CSG. Since our primitives are implicit surfaces, we can achieve fairly complex results with few terms.
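The following CPU sketch mirrors the abstract's rule for one product term and one ray, using spheres as the implicit primitives with exact (discriminant-based) roots; the paper evaluates this per fragment on the GPU, with adaptive marching points replacing the closed-form roots for approximate CSG. Like the rule as stated, candidate roots are drawn only from the uncomplemented surfaces.

```python
import numpy as np

def sphere_roots(o, d, center, r):
    """Real roots of |o + t*d - center|^2 = r^2, via the discriminant."""
    oc = o - center
    a = np.dot(d, d)
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []
    s = np.sqrt(disc)
    return sorted([(-b - s) / (2 * a), (-b + s) / (2 * a)])

def inside(p, center, r, eps=1e-9):
    return np.dot(p - center, p - center) <= r * r + eps

def product_term_hit(o, d, pos_spheres, neg_spheres):
    """Hit of the term AND(pos) AND NOT(neg) along ray o + t*d: the maximum
    entry root among uncomplemented surfaces, accepted only if the point is
    inside every uncomplemented sphere and outside every complemented one."""
    entries = []
    for (c, r) in pos_spheres:
        roots = sphere_roots(o, d, c, r)
        if not roots:
            return None          # ray misses an uncomplemented primitive
        entries.append(roots[0])
    t = max(entries)
    p = o + t * d
    ok = (all(inside(p, c, r) for c, r in pos_spheres) and
          not any(inside(p, c, r) for c, r in neg_spheres))
    return t if ok else None
```

Rendering every product term this way and letting the depth test keep the nearest fragment realizes the sum (union) of the terms, as the abstract describes.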
{"title":"Real-time approximate and exact CSG of implicit surfaces on the GPU","authors":"Jag Mohan Singh","doi":"10.1109/NCVPRIPG.2013.6776199","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776199","url":null,"abstract":"We present a simple and powerful scheme to allow CSG of implicit surfaces on the GPU. We decompose the boolean expression of surfaces into sum-of-products form. Our algorithm presented in this paper then renders each product term, sum of products can be automatically by enabling depth test. Our Approximate CSG uses adaptive marching points algorithm for finding ray-surface intersection. Once we find an interval where root exists after root-isolation, this is used for presence of intersection. We perform root-refinement only for the uncomplemented terms in the product. Exact CSG is done by using the discriminant of the ray-surface intersection for the presence of the root. Now we can simply evaluate the product expression by checking all uncomplemented terms should be true and all complemented terms should be false. If our condition is met, we find the maximum of all the roots among uncomplemented terms to be the solution. Our algorithm is linear in the number of terms O(n). We achieve real-time rates for 4-5 terms in the product for approximate CSG. We achieve more than real-time rates for Exact CSG. Our primitives are implicit surfaces so we can achieve fairly complex results with less terms.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122552265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Geometric invariant Target classification using 2D Mellin cepstrum with modified grid formation
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776260
B. Sathyabama, S. Roomi, R. EvangelineJenitaKamalam
The classification of targets in synthetic aperture radar (SAR) images is greatly affected by scale, rotation and translation. This paper proposes a geometrically invariant algorithm to classify military targets, based on cepstral features derived from a modified grid selection over the spectral components of the Fourier-Mellin transform. The proposed non-uniform grid is formed by a window with a 2×2-pixel cell at the center, surrounded by 4×4-pixel cells, and so on, with overlap, so as to extract more representative features. Each cell is further divided into upper and lower triangular bins; the bin energies form the down-sampled M×M data, taking the larger value of the two triangles so that the information is enhanced. Experiments are carried out on a total of 700 SAR images collected from the MSTAR database with different combinations of rotation, scale and translation. The proposed method has been tested against existing methods such as region covariance, co-differencing and the 2D Mellin cepstrum with non-overlapping grids. The 2D Mellin cepstrum with the proposed grid formation achieves 92% detection accuracy, compared with 86% for the region covariance method and 89% for the non-uniform grid formation method.
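The sketch below covers only the standard Fourier-Mellin front end that the features build on: the magnitude spectrum is translation-invariant, and its log-polar resampling turns rotation and scale into shifts, which a second spectral step absorbs. The paper's non-uniform grid with triangular bins is not reproduced here, and the output size is an illustrative choice.

```python
import cv2
import numpy as np

def fourier_mellin_magnitude(img):
    """Translation/rotation/scale-insensitive spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img.astype(np.float32)))
    mag = np.log1p(np.abs(f)).astype(np.float32)     # translation-invariant
    h, w = mag.shape
    # Log-polar resampling: rotation and scale become cyclic shifts.
    lp = cv2.warpPolar(mag, (w, h), (w / 2.0, h / 2.0),
                       min(h, w) / 2.0, cv2.WARP_POLAR_LOG)
    return np.abs(np.fft.fft2(lp))                   # shift-invariant spectrum

# The paper's method would then pool this spectrum with the proposed
# overlapping non-uniform grid and take cepstral features per bin.
```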
{"title":"Geometric invariant Target classification using 2D Mellin cepstrum with modified grid formation","authors":"B. Sathyabama, S. Roomi, R. EvangelineJenitaKamalam","doi":"10.1109/NCVPRIPG.2013.6776260","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776260","url":null,"abstract":"The Classification of Targets in Synthetic Aperture Radar Images is greatly affected by scale, rotation and translation. This paper proposes a geometric invariant algorithm to classify military targets based on extracting cepstral features derived from the modified grid selection over spectral components of Fourier Mellin Transform. The proposed non uniform grid is formed by a window with a cell of 2×2 pixels at the center, surrounded by the cells of 4×4 pixels, and so on, with overlapping concept to extract better representative features. Further each cell is divided into upper and lower triangular bins. The energy of each bin forms the down sampled M×M data accounting the larger value between the two triangles so that the information is enhanced. The experiments are carried out with a total of 700 SAR images collected from MSTAR database with different combinations of rotation, scale and translations. The proposed method has been tested against existing methods such as Region Covariance, Co-differencing and 2D Mellin cepstrum with non- overlapping grids. The results from 2D-Mellin Cepstrum using the proposed grid formation have been observed to be better in terms of 92% detection accuracy compared with 86% for region covariance method and 89% for non-uniform grid formation method.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124553743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition and identification of target images using feature based retrieval in UAV missions
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776165
Shweta Singh, D. V. Rao
With the introduction of unmanned air vehicles as force multipliers in defense services worldwide, automatic recognition and identification of ground-based targets has become an important area of research in the defense community. Due to the inherent instabilities of smaller unmanned platforms, image blur and distortion must be addressed for successful recognition of the target. In this paper, an image enhancement technique is proposed that can improve the quality of images acquired by an unmanned system: a de-blurring technique based on a blind de-convolution algorithm, which adaptively enhances the edges of characters and effectively removes blur. A content-based image retrieval technique is then used, based on feature extraction to generate an image description and a compact feature vector representing the visual information (color, texture and shape), together with a minimum-distance algorithm, to retrieve the plausible target images from a library of images stored in a target folder. This methodology was implemented for planning and gaming UAV/UCAV missions in the Air Warfare Simulation System.
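A minimal sketch of the retrieval step: a compact feature vector per library image and minimum-distance (nearest-neighbour) matching. A color histogram stands in for the paper's full color/texture/shape descriptor, so the feature itself is an illustrative assumption.

```python
import cv2
import numpy as np

def feature_vector(bgr_img, bins=8):
    """Compact color descriptor: normalized 3-D BGR histogram, flattened."""
    hist = cv2.calcHist([bgr_img], [0, 1, 2], None,
                        [bins] * 3, [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def retrieve(query_img, library):
    """library: list of (name, feature_vector) built offline from the target
    folder; returns entries sorted by Euclidean (minimum) distance."""
    q = feature_vector(query_img)
    return sorted(library, key=lambda nf: np.linalg.norm(q - nf[1]))

# library = [(name, feature_vector(cv2.imread(path))) for name, path in targets]
# best_match_name = retrieve(deblurred_frame, library)[0][0]
```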
{"title":"Recognition and identification of target images using feature based retrieval in UAV missions","authors":"Shweta Singh, D. V. Rao","doi":"10.1109/NCVPRIPG.2013.6776165","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776165","url":null,"abstract":"With the introduction of unmanned air vehicles as force multipliers in the defense services worldwide, automatic recognition and identification of ground based targets has become an important area of research in the defense community. Due to inherent instabilities in smaller unmanned platforms, image blurredness and distortion need to be addressed for the successful recognition of the target. In this paper, an image enhancement technique that can improve images' quality acquired by an unmanned system is proposed. An image de-blurring technique based on blind de-convolution algorithm which adaptively enhances the edges of characters and wipes off blurredness effectively is proposed. A content-based image retrieval technique based on features extraction to generate an image description and a compact feature vector that represents the visual information, color, texture and shape is used with a minimum distance algorithm to effectively retrieve the plausible target images from a library of images stored in a target folder. This methodology was implemented for planning and gaming the UAV/UCAV missions in the Air Warfare Simulation System.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128882269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}