Parallel mesh regularization and resampling algorithm for improved mesh registration
Sumandeep Banerjee, Somnath Dutta, P. Biswas, Partha Bhowmick
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776183
In this paper, we present a fast and efficient algorithm for the regularization and resampling of triangular meshes generated by 3D reconstruction methods such as stereoscopy and laser scanning. We also present a scheme for efficient parallel implementation of the proposed algorithm and analyse the time gain with an increasing number of processor cores.

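A core regularization step such schemes often rely on is neighbourhood (Laplacian) smoothing, whose per-vertex updates are independent and therefore parallelize naturally across cores. The sketch below is a generic illustration in that spirit, not the authors' algorithm; the mesh, neighbour lists and parameters are invented for the example.

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iters=50):
    """Move each vertex toward the centroid of its mesh neighbours.
    Every vertex update depends only on the previous iterate, so each
    iteration is trivially data-parallel across processor cores."""
    v = vertices.astype(float).copy()
    for _ in range(iters):
        centroids = np.array([v[list(n)].mean(axis=0) for n in neighbors])
        v = v + lam * (centroids - v)   # relax toward the local centroid
    return v

# A jagged polyline standing in for a noisy mesh cross-section.
verts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, -1.0], [3.0, 0.0]])
nbrs = [(1,), (0, 2), (1, 3), (2,)]
smoothed = laplacian_smooth(verts, nbrs)
```

After enough iterations the jagged y-coordinates flatten out while the vertices stay on the supporting curve, which is the regularization effect a subsequent registration step benefits from.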
Improving video summarization based on user preferences
R. Kannan, G. Ghinea, Sridhar Swaminathan, Suresh Kannaiyan
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776187
Although several automatic video summarization systems have been proposed in the past, a generic summary based only on low-level features will not satisfy every user. Because users' needs and preferences for a summary of the same video differ widely, a personalized and customized video summarization system is needed. To address this need, this paper proposes a novel system for generating unique, semantically meaningful video summaries of the same video, tailored to the preferences and interests of individual users. The proposed system stitches a video summary from the requested summary time span and the top-ranked shots that are semantically relevant to the user's preferences. Experimental results on the performance of the proposed video summarization system are encouraging.

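The stitching step described above — rank shots by relevance to the user, fill the requested time span, then restore temporal order — can be sketched as a simple greedy procedure. The shot list, relevance scores and budget below are hypothetical.

```python
def stitch_summary(shots, relevance, budget):
    """Greedy sketch: take shots in decreasing relevance until the
    requested summary time span is filled, then restore temporal order.
    `shots` is a list of (start_time, duration) pairs."""
    order = sorted(range(len(shots)), key=lambda i: relevance[i], reverse=True)
    chosen, used = [], 0.0
    for i in order:
        dur = shots[i][1]
        if used + dur <= budget:     # shot still fits in the time span
            chosen.append(i)
            used += dur
    return sorted(chosen, key=lambda i: shots[i][0])  # temporal order

shots = [(0, 10), (10, 5), (15, 20), (35, 8)]   # (start, duration)
relevance = [0.2, 0.9, 0.4, 0.8]                # user-preference scores
summary = stitch_summary(shots, relevance, budget=15)
```

With a 15-second budget the two most relevant shots (indices 1 and 3, totalling 13 seconds) are selected and emitted in playback order.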
A robust faint line detection and enhancement algorithm for mural images
Mrinmoy Ghorai, B. Chanda
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776175
Mural images are noisy and contain faint and broken lines. We propose a novel technique for detecting straight and curved lines, together with an enhancement algorithm for deteriorated mural images. First, we compute statistics on the gray image using oriented templates; the outcome of this process is taken as the line strength at each pixel. As a side effect, some unwanted lines are also detected in textured regions. Based on the Gestalt law of continuity, we propose an anisotropic refinement that strengthens the true lines and suppresses the unwanted ones. A modified bilateral filter is employed to remove noise. Experimental results show that the approach is robust in restoring the lines in mural images.

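The oriented-template idea — probe each pixel with short line templates at several orientations and keep the strongest response — can be sketched as follows. The templates, their size and the dark-line-on-bright-background assumption are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import correlate

def line_strength(gray):
    """Correlate the gray image with short averaging templates at four
    orientations (horizontal, vertical, two diagonals) and keep, per
    pixel, the strongest dark-line response."""
    k = np.ones((1, 5)) / 5.0
    templates = [k, k.T, np.eye(5) / 5.0, np.fliplr(np.eye(5)) / 5.0]
    responses = [correlate(gray.astype(float), t, mode='nearest')
                 for t in templates]
    # A dark line on a bright background minimizes the template mean,
    # so strength = brightness ceiling minus the best (lowest) response.
    return gray.max() - np.min(responses, axis=0)

img = np.full((9, 9), 200.0)
img[4, :] = 50.0                 # one faint horizontal dark line
s = line_strength(img)
```

The strength map peaks along the faint line and stays near zero in flat regions, which is the per-pixel evidence the anisotropic refinement then cleans up.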
Improvised eigenvector selection for spectral Clustering in image segmentation
Aditya Prakash, S. Balasubramanian, R. R. Sarma
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776233
General spectral clustering (SC) algorithms employ the top eigenvectors of the normalized Laplacian for spectral rounding. However, recent research has pointed out that, for noisy and sparse data, not all top eigenvectors are informative or relevant for clustering, and using them for spectral rounding may lead to poor clustering results. The self-tuning SC method of Zelnik-Manor and Perona [1] imposes a very stringent condition for selecting relevant eigenvectors: the best possible alignment with the canonical coordinate system. We analyse their algorithm and relax the best-alignment criterion to an average-alignment criterion. We demonstrate the effectiveness of our modification on synthetic as well as natural images by comparing results on the Berkeley segmentation benchmark dataset.

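A minimal sketch of the ingredients: the leading eigenvectors of the symmetric normalized Laplacian, plus a crude per-eigenvector alignment score whose mean over the selected eigenvectors stands in for the relaxed, average-alignment criterion. The affinity matrix and the alignment measure are invented for illustration and are far simpler than the rotation-based criterion of Zelnik-Manor and Perona.

```python
import numpy as np

def normalized_laplacian_eigs(W, k):
    """First k eigenvectors of L = I - D^{-1/2} W D^{-1/2}
    (smallest eigenvalues first)."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    vals, vecs = np.linalg.eigh(L)          # ascending eigenvalues
    return vals[:k], vecs[:, :k]

def alignment(v):
    """Crude alignment score with the canonical axes: 1.0 when all mass
    sits on one coordinate, small when the vector is spread out."""
    v = v / np.linalg.norm(v)
    return np.max(v ** 2)

# Two clean two-node clusters joined by a weak bridge.
W = np.array([[0.00, 1.00, 0.01, 0.01],
              [1.00, 0.00, 0.01, 0.01],
              [0.01, 0.01, 0.00, 1.00],
              [0.01, 0.01, 1.00, 0.00]])
vals, vecs = normalized_laplacian_eigs(W, k=3)
scores = [alignment(vecs[:, i]) for i in range(3)]
avg_alignment = float(np.mean(scores))
```

Selecting eigenvectors by thresholding the average of such scores, rather than demanding each one be maximally aligned, is the spirit of the relaxation described in the abstract.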
Pan-sharpening based on Non-subsampled Contourlet Transform detail extraction
Kishor P. Upla, P. Gajjar, M. Joshi
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776258
In this paper, we propose a new pan-sharpening method using the Non-subsampled Contourlet Transform (NSCT). The panchromatic (Pan) and multi-spectral (MS) images provided by many satellites have high spatial and high spectral resolution, respectively; a pan-sharpened image with both high spatial and high spectral resolution is obtained by combining them. Since the NSCT is shift-invariant and has better directional decomposition capability than the contourlet transform, we use it to extract high-frequency information from the available Pan image. First, a two-level NSCT decomposition is performed on the high-spatial-resolution Pan image. The required high-frequency details are obtained by subtracting the coarser subband of this decomposition from the original Pan image. The extracted details are then added to the MS image such that the original spectral signature is preserved in the final fused image. Experiments have been conducted on images captured by different satellite sensors, including IKONOS-2, WorldView-2 and QuickBird. Traditional quantitative measures, along with the quality with no reference (QNR) index, are evaluated to assess the potential of the proposed method. The proposed approach performs better than recently proposed state-of-the-art methods such as the additive wavelet luminance proportional (AWLP) method and the context-based decision (CBD) method.

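The detail-injection scheme can be sketched with a generic low-pass filter standing in for the two-level NSCT decomposition (which is not available in standard libraries): subtract the coarse approximation from the Pan image and add the residual to each MS band, here assumed already resampled to the Pan grid. All shapes and parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_injection_pansharpen(pan, ms_up, sigma=2.0):
    """pan: (H, W) panchromatic image.  ms_up: multi-spectral bands
    already resampled to the Pan grid, shape (bands, H, W).  A Gaussian
    low-pass plays the role of the coarse NSCT subband."""
    detail = pan - gaussian_filter(pan, sigma)   # high-frequency residual
    return ms_up + detail[None, :, :]            # inject into every band

pan = np.zeros((16, 16))
pan[8, :] = 1.0                       # a sharp, line-like Pan feature
ms_up = np.full((3, 16, 16), 0.5)     # flat, blurry stand-in MS bands
fused = detail_injection_pansharpen(pan, ms_up)
```

Because the same zero-mean detail is added to every band, band ratios (and hence the spectral signature) are disturbed far less than by replacing bands outright, which is the motivation for injection-style fusion.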
MRF and DP based specular surface reconstruction
K. RavindraRedddy, A. Namboodiri
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776239
This paper addresses the reconstruction of specular surfaces using a combination of dynamic programming (DP) and a Markov Random Field (MRF) formulation. Unlike traditional methods that require the exact positions of environment points to be known, our method requires only their relative positions to compute approximate normals and infer shape from them. We present an approach that estimates depth from a dynamic programming routine and from MRF stereo matching, and uses MRF optimization to fuse the two results into a robust shape estimate. We use a smooth color-gradient image as the environment texture so that shape can be recovered from just a single shot. We evaluate our method in synthetic experiments on 3D models such as the Stanford bunny, and show real-world results on a golden statue and a silver-coated statue.

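The dynamic-programming ingredient can be illustrated with a textbook Viterbi-style scanline optimization over depth labels: a data cost per pixel plus a smoothness penalty on the label difference between neighbours. The costs below are invented; this is not the paper's exact formulation.

```python
import numpy as np

def scanline_dp(data_cost, smooth=0.5):
    """Exact minimization along one scanline of
    sum_i data_cost[i, l_i] + smooth * |l_i - l_{i-1}|
    by forward DP with backpointers.  data_cost: (n_pixels, n_labels)."""
    n, L = data_cost.shape
    jump = np.abs(np.arange(L)[:, None] - np.arange(L)[None, :])  # |l - l'|
    cost = data_cost[0].astype(float).copy()
    back = np.zeros((n, L), dtype=int)
    for i in range(1, n):
        total = cost[None, :] + smooth * jump      # total[l, l'] via prev l'
        back[i] = np.argmin(total, axis=1)
        cost = data_cost[i] + total[np.arange(L), back[i]]
    path = np.empty(n, dtype=int)                  # backtrack best path
    path[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i, path[i]]
    return path

# Pixels 0-2 prefer depth label 0, pixels 3-5 prefer label 2.
data = np.full((6, 3), 5.0)
data[:3, 0] = 0.0
data[3:, 2] = 0.0
path = scanline_dp(data)
```

The DP pays the label-jump penalty exactly once, at the depth discontinuity, instead of flattening it away — the behaviour one wants before fusing with an MRF estimate.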
Enhancement of camera captured text images with specular reflection
A. Visvanathan, T. Chattopadhyay, U. Bhattacharya
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776189
Specular reflection of light degrades the quality of scene images. Whenever specular reflection affects the text portion of such an image, its readability is reduced significantly, and it becomes difficult for OCR software to detect and recognize the affected text. In the present work, we propose a novel but simple technique to enhance image regions affected by specular reflection. Pixels with specular reflection are identified in the YUV color space; the affected region is then enhanced by interpolating plausible pixel values in YUV space. The proposed method has been compared against several existing general-purpose image enhancement techniques: (i) histogram equalization, (ii) gamma correction and (iii) Laplacian-filter-based enhancement. The approach has been tested on images from the ICDAR 2003 Robust Reading Competition database. A Mean Opinion Score based measure shows that the proposed method outperforms the existing enhancement techniques in improving the readability of text in images affected by specular reflection.

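The two-step scheme — flag specular pixels by their luma in YUV, then interpolate replacement values from unaffected neighbours — can be sketched as follows. The luma threshold and the simple neighbour-mean interpolation are assumptions made for illustration.

```python
import numpy as np

def suppress_specular(rgb, y_thresh=0.92):
    """Flag pixels whose BT.601 luma Y exceeds a high threshold (likely
    specular highlights), then replace each flagged pixel with the mean
    of its non-flagged 8-neighbours."""
    rgb = rgb.astype(float)
    Y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    mask = Y > y_thresh * 255
    out = rgb.copy()
    H, W = mask.shape
    for i, j in zip(*np.nonzero(mask)):
        nbrs = [rgb[a, b]
                for a in range(max(0, i - 1), min(H, i + 2))
                for b in range(max(0, j - 1), min(W, j + 2))
                if not mask[a, b]]
        if nbrs:                          # leave isolated cases untouched
            out[i, j] = np.mean(nbrs, axis=0)
    return out

img = np.full((5, 5, 3), 120.0)
img[2, 2] = [255.0, 255.0, 255.0]         # one saturated specular pixel
clean = suppress_specular(img)
```

The saturated pixel is pulled back to the surrounding paper tone while untouched pixels are left exactly as they were, which is what makes such a local repair safe ahead of OCR.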
Fusion of satellite images using Compressive Sampling Matching Pursuit (CoSaMP) method
B. Sathyabama, S. Sankari, S. Nayagara
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776256
Fusion of a low-resolution multi-spectral (LRMS) image with a high-resolution panchromatic (HRPAN) image is an important topic in remote sensing. This paper addresses the fusion of satellite images using a sparse representation of the data. The high-resolution MS image is reconstructed from sparse representations of the HRPAN and LRMS images using Compressive Sampling Matching Pursuit (CoSaMP), which builds on the Orthogonal Matching Pursuit (OMP) algorithm. Sparse coefficients are produced by correlating the LRMS image patches with the LR PAN dictionary, and the HRMS image is formed by combining these sparse coefficients with the HR PAN dictionary. WorldView-2 satellite images (HRPAN and LRMS) of Madurai, Tamil Nadu are used to test the proposed method. The experimental results show that the method preserves the spectral and spatial details of the input images well through adaptive learning. Compared with other well-known methods, the proposed method offers high-quality results, achieving a Quality with No Reference (QNR) score of 87.28%.

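CoSaMP builds on the greedy atom-selection idea of OMP, which is compact enough to sketch in full. The orthonormal toy dictionary below is chosen so that exact recovery of the sparse coefficients is guaranteed; it is an illustration, not the paper's PAN dictionary.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom
    most correlated with the current residual, then re-fit all selected
    atoms by least squares.  CoSaMP extends this by selecting and
    pruning several atoms per iteration."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # orthogonal re-fit
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Orthonormal toy dictionary: correlations equal the true coefficients,
# so two OMP steps recover the 2-sparse code exactly.
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(8, 8)))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]
x_hat = omp(D, y=D @ x_true, k=2)
```

In the fusion setting, the same sparse code found against the low-resolution dictionary is re-synthesized with the high-resolution dictionary to produce the fused patch.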
Despeckling SAR images in the lapped transform domain
D. Hazarika, M. Bhuyan
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776255
In this paper, a novel lapped transform (LT) based approach to SAR image despeckling is introduced. It is shown that the LT coefficients of log-transformed, noise-free SAR images obey a generalized Gaussian distribution. The proposed method uses a Bayesian minimum mean square error (MMSE) estimator based on modeling the global distribution of the rearranged LT coefficients in a subband with a generalized Gaussian distribution. Finally, the algorithm is implemented in cycle-spinning mode to compensate for the lack of translation invariance of the LT. Experiments are carried out on synthetically speckled natural images and on SAR images. Compared with several existing despeckling techniques, the proposed Bayesian technique in the LT framework yields very good despeckling results while preserving the important details and textural information of the scene.

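The Bayesian MMSE shrinkage idea can be illustrated in its simplest, Gaussian-prior special case, where the estimator has the closed form x_hat = y * sig_x^2 / (sig_x^2 + sig_n^2); the paper's generalized Gaussian model lacks such a closed form and is evaluated numerically. The synthetic subband below is invented for the example.

```python
import numpy as np

def mmse_shrink(coeffs, noise_var):
    """MMSE shrinkage of transform coefficients under a zero-mean
    Gaussian signal prior (the Gaussian special case of a generalized
    Gaussian model).  Signal variance is estimated from the observed
    subband by moment matching: var(y) = sig_x^2 + sig_n^2."""
    sig_x2 = max(np.var(coeffs) - noise_var, 0.0)
    return coeffs * sig_x2 / (sig_x2 + noise_var)

rng = np.random.default_rng(1)
clean = rng.normal(scale=2.0, size=10_000)        # synthetic subband
noisy = clean + rng.normal(scale=1.0, size=10_000)
denoised = mmse_shrink(noisy, noise_var=1.0)
err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

For these variances the theoretical MSE drops from 1.0 to sig_x^2*sig_n^2/(sig_x^2+sig_n^2) = 0.8, and averaging the estimate over shifted copies of the transform (cycle spinning) reduces it further in practice.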
Multi-resolution image fusion using multistage guided filter
Sharad Joshi, Kishor P. Upla, M. Joshi
Pub Date: 2013-12-01 | DOI: 10.1109/NCVPRIPG.2013.6776257
In this paper, we propose a multi-resolution image fusion approach based on a multistage guided filter (MGF). Given a high-spatial-resolution panchromatic (Pan) image and a high-spectral-resolution multi-spectral (MS) image, the algorithm obtains a single fused image with both high spectral and high spatial resolution. We extract the missing high-frequency details of the MS image using the multistage guided filter. The detail extraction process exploits the relationship between the Pan and MS images by using one of them as the guidance image and extracting details from the other; the spatial distortion of the MS image is reduced by consistently combining the details obtained from both. The final fused image is obtained by adding the extracted high-frequency details to the corresponding MS image. The results of the proposed algorithm are compared with commonly used traditional methods, as well as with a recently proposed method, on QuickBird, Ikonos-2 and WorldView-2 satellite images. The quantitative assessment uses conventional measures along with a relatively new index, quality with no reference (QNR), which does not require a reference image. The results clearly show a significant improvement in the quality of the fused image with the proposed approach.

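The single-stage guided filter (He et al.) that such a multistage scheme cascades can be sketched directly, together with one detail-extraction step: filter the sharp Pan image with a blurry MS band as guide and keep the residual. The toy edge image and parameters are illustrative, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=2, eps=1e-4):
    """Single-stage guided filter: fit src as a locally linear function
    of the guide (src ~ a*guide + b per window), then average the
    per-window coefficients a and b."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)          # eps regularizes flat regions
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Detail extraction: what the guided smoothing of Pan (guided by the
# blurry MS band) cannot reproduce is the missing high-frequency detail.
pan = np.zeros((16, 16))
pan[:, 8:] = 1.0                        # sharp vertical edge
ms = uniform_filter(pan, 5)             # blurry stand-in for an MS band
detail = pan - guided_filter(ms, pan)   # high-frequency Pan details
fused = ms + detail
```

The extracted detail is concentrated at the edge and exactly zero in flat regions, so injecting it sharpens the MS band without disturbing homogeneous areas — the spatial-consistency property the abstract appeals to.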