
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG): Latest Publications

Parallel mesh regularization and resampling algorithm for improved mesh registration
Sumandeep Banerjee, Somnath Dutta, P. Biswas, Partha Bhowmick
In this paper, we present a fast and efficient algorithm for the regularization and resampling of triangular meshes generated by 3D reconstruction methods such as stereoscopy and laser scanning. We also present a scheme for efficient parallel implementation of the proposed algorithm and report the time gain obtained as the number of processor cores increases.
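The abstract does not spell out the regularization step, so the sketch below only illustrates the general pattern rather than the authors' method: Laplacian smoothing is assumed as a stand-in regularization pass, and the per-vertex work is split into chunks that could be handed to separate processor cores. All function names and the relaxation factor `lam` are hypothetical.

```python
# Minimal sketch (assumed stand-in, not the paper's algorithm): one Laplacian-
# smoothing pass over a triangle mesh, parallelized over vertex chunks.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def build_adjacency(faces, n_vertices):
    """Collect the neighbour indices of every vertex from the face list."""
    neigh = [set() for _ in range(n_vertices)]
    for a, b, c in faces:
        neigh[a].update((b, c))
        neigh[b].update((a, c))
        neigh[c].update((a, b))
    return [np.fromiter(s, dtype=np.int64) for s in neigh]

def smooth_chunk(args):
    """Move each vertex in the chunk towards the centroid of its neighbours."""
    vertices, neigh, idx, lam = args
    out = np.empty((len(idx), 3))
    for k, i in enumerate(idx):
        centroid = vertices[neigh[i]].mean(axis=0) if len(neigh[i]) else vertices[i]
        out[k] = vertices[i] + lam * (centroid - vertices[i])
    return idx, out

def regularize(vertices, faces, lam=0.5, workers=4):
    neigh = build_adjacency(faces, len(vertices))
    chunks = np.array_split(np.arange(len(vertices)), workers)
    new_v = vertices.copy()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for idx, moved in pool.map(smooth_chunk,
                                   [(vertices, neigh, c, lam) for c in chunks]):
            new_v[idx] = moved
    return new_v
```

Chunking the per-vertex work is what lets the running time shrink as more cores are added, which mirrors the core-count scaling discussed in the abstract.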
{"title":"Parallel mesh regularization and resampling algorithm for improved mesh registration","authors":"Sumandeep Banerjee, Somnath Dutta, P. Biswas, Partha Bhowmick","doi":"10.1109/NCVPRIPG.2013.6776183","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776183","url":null,"abstract":"In this paper, we present a fast and efficient algorithm for regularization and resampling of triangular meshes generated by 3D reconstruction methods such as stereoscopy, laser scanning etc. We also present a scheme for efficient parallel implementation of the proposed algorithm and the time gain with increasing number of processor cores.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128988878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving video summarization based on user preferences
R. Kannan, G. Ghinea, Sridhar Swaminathan, Suresh Kannaiyan
Although several automatic video summarization systems have been proposed in the past, a generic summary based only on low-level features will not satisfy every user. Because users' needs and preferences for a summary of the same video differ vastly, a personalized and customized video summarization system has become an urgent need. To address this need, this paper proposes a novel system that generates unique, semantically meaningful video summaries of the same video, tailored to the preferences or interests of individual users. The proposed system stitches together a video summary based on the requested summary time span and the top-ranked shots that are semantically relevant to the user's preferences. Experimental results on the performance of the proposed video summarization system are encouraging.
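As a rough illustration of the stitching idea described above, and not the authors' actual system, the sketch below ranks shots by overlap between their semantic tags and the user's preferences and then fills the requested time span with the top-ranked shots. The shot and tag data structures are invented for the example.

```python
# Minimal sketch: preference-driven shot ranking and time-budgeted stitching.
def summarize(shots, user_prefs, time_span):
    """shots: list of dicts {"id", "duration", "tags": set}; user_prefs: set of tags."""
    def relevance(shot):
        return len(shot["tags"] & user_prefs) / (len(shot["tags"]) or 1)

    ranked = sorted(shots, key=relevance, reverse=True)
    summary, used = [], 0.0
    for shot in ranked:
        if used + shot["duration"] <= time_span:
            summary.append(shot)
            used += shot["duration"]
    # Restore temporal order so the stitched summary plays coherently.
    return sorted(summary, key=lambda s: s["id"])

# Hypothetical usage: prefer "goal" shots, 10-second budget.
example = summarize(
    [{"id": 0, "duration": 5, "tags": {"goal", "crowd"}},
     {"id": 1, "duration": 7, "tags": {"interview"}},
     {"id": 2, "duration": 4, "tags": {"goal"}}],
    user_prefs={"goal"}, time_span=10)
```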
{"title":"Improving video summarization based on user preferences","authors":"R. Kannan, G. Ghinea, Sridhar Swaminathan, Suresh Kannaiyan","doi":"10.1109/NCVPRIPG.2013.6776187","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776187","url":null,"abstract":"Although in the past, several automatic video summarization systems had been proposed to generate video summary, a generic summary based only on low-level features will not satisfy every user. As users' needs or preferences for the summary vastly differ for the same video, a unique personalized and customized video summarization system becomes an urgent need nowadays. To address this urgent need, this paper proposes a novel system for generating unique semantically meaningful video summaries for the same video, that are tailored to the preferences or interests of the users. The proposed system stitches video summary based on summary time span and top-ranked shots that are semantically relevant to the user's preferences. The experimental results on the performance of the proposed video summarization system are encouraging.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134266744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
A robust faint line detection and enhancement algorithm for mural images
Mrinmoy Ghorai, B. Chanda
Mural images are noisy and consist of faint and broken lines. Here we propose a novel technique for detecting straight and curved lines, together with an enhancement algorithm for deteriorated mural images. First, we compute statistics on the gray image using oriented templates; the outcome of this process is taken as the line strength at each pixel. As a result, some unwanted lines are also detected in textured regions. Based on the Gestalt law of continuity, we propose an anisotropic refinement to strengthen the true lines and suppress the unwanted ones. A modified bilateral filter is employed to remove the noise. Experimental results show that the approach is robust in restoring the lines in mural images.
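A minimal sketch of the first stage only, oriented-template line strength: a thin line template is rotated over a handful of orientations, correlated with the gray image, and the maximum response per pixel is kept. The anisotropic refinement and the modified bilateral filter are not reproduced, and the template size and orientation count are illustrative assumptions.

```python
# Minimal sketch: per-pixel line strength from a bank of oriented templates.
import numpy as np
from scipy.ndimage import convolve, rotate

def line_strength(gray, size=9, n_orient=8):
    """Maximum zero-mean oriented-template response at each pixel."""
    base = np.zeros((size, size))
    base[size // 2, :] = 1.0              # a thin horizontal line template
    responses = []
    for k in range(n_orient):
        templ = rotate(base, angle=180.0 * k / n_orient, reshape=False, order=1)
        templ -= templ.mean()             # zero mean: respond to structure, not brightness
        responses.append(convolve(gray.astype(float), templ, mode="reflect"))
    return np.max(responses, axis=0)
```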
{"title":"A robust faint line detection and enhancement algorithm for mural images","authors":"Mrinmoy Ghorai, B. Chanda","doi":"10.1109/NCVPRIPG.2013.6776175","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776175","url":null,"abstract":"Mural images are noisy and consist of faint and broken lines. Here we propose a novel technique for straight and curve line detection and also an enhancement algorithm for deteriorated mural images. First we compute some statistics on gray image using oriented templates. The outcome of the process are taken as a strength of the line at each pixel. As a result some unwanted lines are also detected in the texture region. Based on Gestalt law of continuity we propose an anisotropic refinement to strengthen the true lines and to suppress the unwanted ones. A modified bilateral filter is employed to remove the noises. Experimental result shows that the approach is robust to restore the lines in the mural images.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134599349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Improvised eigenvector selection for spectral Clustering in image segmentation
Aditya Prakash, S. Balasubramanian, R. R. Sarma
General spectral clustering (SC) algorithms employ the top eigenvectors of the normalized Laplacian for spectral rounding. However, recent research has pointed out that, in the case of noisy and sparse data, not all top eigenvectors may be informative or relevant for clustering, and using them for spectral rounding may lead to poor clustering results. The self-tuning SC method proposed by Zelnik and Perona [1] places a very stringent condition on the selection of relevant eigenvectors: the best possible alignment with the canonical coordinate system. We analyse their algorithm and relax the best-alignment criterion to an average-alignment criterion. We demonstrate the effectiveness of our improvisation on synthetic as well as natural images by comparing results on the Berkeley segmentation and benchmarking dataset.
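The relevance test below is only a loose proxy for the alignment idea, not the authors' criterion or the rotation-based cost of [1]: it keeps an eigenvector if at least a small share of the embedded points align best with its axis, and reports the average (rather than best-case) alignment of the kept embedding. Function names and thresholds are hypothetical.

```python
# Minimal sketch: pick "relevant" Laplacian eigenvectors via a rough
# alignment-with-canonical-axes proxy, scored on average rather than worst case.
import numpy as np

def normalized_laplacian(W):
    d = np.maximum(W.sum(axis=1), 1e-12)
    D = np.diag(1.0 / np.sqrt(d))
    return np.eye(len(W)) - D @ W @ D

def select_relevant_eigenvectors(W, k_max=6, min_share=0.05):
    """Keep only eigenvectors that a non-trivial share of embedded points align with."""
    _, vecs = np.linalg.eigh(normalized_laplacian(W))
    Z = vecs[:, :k_max]                                    # smallest eigenvalues first
    Zn = Z / np.maximum(np.linalg.norm(Z, axis=1, keepdims=True), 1e-12)
    owner = np.argmax(Zn ** 2, axis=1)                     # best-aligned axis per row
    share = np.bincount(owner, minlength=k_max) / len(Zn)
    keep = share >= min_share
    keep[0] = True                                         # always retain the leading one
    avg_alignment = np.mean(np.max(Zn[:, keep] ** 2, axis=1))   # relaxed, average score
    return Z[:, keep], avg_alignment
```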
{"title":"Improvised eigenvector selection for spectral Clustering in image segmentation","authors":"Aditya Prakash, S. Balasubramanian, R. R. Sarma","doi":"10.1109/NCVPRIPG.2013.6776233","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776233","url":null,"abstract":"General spectral Clustering(SC) algorithms employ top eigenvectors of normalized Laplacian for spectral rounding. However, recent research has pointed out that in case of noisy and sparse data, all top eigenvectors may not be informative or relevant for the purpose of clustering. Use of these eigenvectors for spectral rounding may lead to bad clustering results. Self-tuning SC method proposed by Zelnik and Perona [1] places a very stringent condition of best alignment possible with canonical coordinate system for selection of relevant eigenvectors. We analyse their algorithm and relax the best alignment criterion to an average alignment criterion. We demonstrate the effectiveness of our improvisation on synthetic as well as natural images by comparing the results using Berkeley segmentation and benchmarking dataset.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124783337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Pan-sharpening based on Non-subsampled Contourlet Transform detail extraction
Kishor P. Upla, P. Gajjar, M. Joshi
In this paper, we propose a new pan-sharpening method using the Non-subsampled Contourlet Transform (NSCT). The panchromatic (Pan) and multi-spectral (MS) images provided by many satellites have high spatial and high spectral resolutions, respectively. A pan-sharpened image with both high spatial and high spectral resolution is obtained from these images. Since the NSCT is shift invariant and has better directional decomposition capability than the contourlet transform, we use it to extract high-frequency information from the available Pan image. First, a two-level NSCT decomposition is performed on the Pan image, which has high spatial resolution. The required high-frequency details are obtained from the coarser subband available after this decomposition: the coarser subband is subtracted from the original Pan image to obtain the details. These extracted details are then added to the MS image such that the original spectral signature is preserved in the final fused image. Experiments have been conducted on images captured by different satellite sensors such as Ikonos-2, Worldview-2 and Quickbird. Traditional quantitative measures, along with the quality with no reference (QNR) index, are evaluated to assess the potential of the proposed method. The proposed approach performs better than recently proposed state-of-the-art methods such as the additive wavelet luminance proportional (AWLP) method and the context-based decision (CBD) method.
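NSCT is not available in standard Python libraries, so the sketch below keeps only the detail-injection skeleton of the method: a Gaussian low-pass stands in for the coarse subband of the two-level NSCT decomposition, and the extracted details are simply added to each upsampled MS band. The directional subbands and any injection gains of the actual method are omitted, and `sigma` is an illustrative assumption.

```python
# Minimal sketch of detail injection with a low-pass stand-in for the NSCT coarse subband.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pansharpen(pan, ms, sigma=2.0):
    """pan: 2-D float array; ms: (bands, h, w) float array at lower resolution."""
    coarse = gaussian_filter(pan, sigma)          # stand-in for the coarse subband
    details = pan - coarse                        # extracted high-frequency details
    scale = (pan.shape[0] / ms.shape[1], pan.shape[1] / ms.shape[2])
    fused = np.stack([zoom(band, scale, order=1) + details for band in ms])
    return fused
```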
{"title":"Pan-sharpening based on Non-subsampled Contourlet Transform detail extraction","authors":"Kishor P. Upla, P. Gajjar, M. Joshi","doi":"10.1109/NCVPRIPG.2013.6776258","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776258","url":null,"abstract":"In this paper, we propose a new pan-sharpening method using Non-subsampled Contourlet Transform. The panchromatic (Pan) and multi-spectral (MS) images provided by many satellites have high spatial and high spectral resolutions, respectively. The pan-sharpened image which has high spatial and spectral resolutions is obtained by using these images. Since the NSCT is shift invariant and it has better directional decomposition capability compared to contourlet transform, we use it to extract high frequency information from the available Pan image. First, two level NSCT decomposition is performed on the Pan image which has high spatial resolution. The required high frequency details are obtained by using the coarser subband available after the two level NSCT decomposition of the Pan image. The coarser sub-band is subtracted from the original Pan image to obtain these details. These extracted details are then added to MS image such that the original spectral signature is preserved in the final fused image. The experiments have been conducted on images captured from different satellite sensors such as IKonos-2, Worlview-2 and Quickbird. The traditional quantitative measures along with quality with no reference (QNR) index are evaluated to check the potential of the proposed method. The proposed approach performs better compared to the recently proposed state of the art methods such as additive wavelet luminance proportional (AWLP) method and context based decision (CBD) method.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124852147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
MRF and DP based specular surface reconstruction
K. RavindraRedddy, A. Namboodiri
This paper addresses the problem of reconstructing specular surfaces using a combination of a Dynamic Programming (DP) and a Markov Random Field (MRF) formulation. Unlike traditional methods that require the exact positions of environment points to be known, our method requires only their relative positions in order to compute approximate normals and infer shape from them. We present an approach that estimates depth from a dynamic programming routine and from MRF stereo matching, and uses MRF optimization to fuse the two results into a robust estimate of shape. We use a smooth color-gradient image as the environment texture so that shape can be recovered from just a single shot. We evaluate our method with synthetic experiments on 3D models such as the Stanford bunny, and show real experimental results on a golden statue and a silver-coated statue.
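The fusion step can be pictured with a simple quadratic MRF: the two per-pixel depth maps act as data terms and a pairwise term encourages smoothness. The sketch below minimises this toy energy with Jacobi iterations; the weights, iteration count, and periodic boundary handling are illustrative assumptions, and the paper's actual energy and optimiser may well differ.

```python
# Minimal sketch: fuse two depth estimates by minimising a quadratic MRF energy
#   E(d) = sum w1*(d - d_dp)^2 + w2*(d - d_stereo)^2 + lam * sum_{neighbours} (d_p - d_q)^2
import numpy as np

def fuse_depths(d_dp, d_stereo, w1=1.0, w2=1.0, lam=4.0, iters=200):
    d = 0.5 * (d_dp + d_stereo)                      # initial estimate
    for _ in range(iters):
        # Sum of the four neighbours (periodic boundaries, for brevity only).
        nb = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
              np.roll(d, 1, 1) + np.roll(d, -1, 1))
        # Closed-form minimiser of the local quadratic energy at each pixel.
        d = (w1 * d_dp + w2 * d_stereo + lam * nb) / (w1 + w2 + 4.0 * lam)
    return d
```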
{"title":"MRF and DP based specular surface reconstruction","authors":"K. RavindraRedddy, A. Namboodiri","doi":"10.1109/NCVPRIPG.2013.6776239","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776239","url":null,"abstract":"This paper addresses the problem of reconstruction of specular surfaces using a combination of Dynamic Programming and Markov Random Fields formulation. Unlike traditional methods that require the exact position of environment points to be known, our method requires only the relative position of the environment points to be known for computing approximate normals and infer shape from them. We present an approach which estimates the depth from dynamic programming routine and MRF stereo matching and use MRF optimization to fuse the results to get the robust estimate of shape. We used smooth color gradient image as our environment texture so that shape can be recovered using just a single shot. We evaluate our method using synthetic experiments on 3D models like Stanford bunny and show the real experiment results on golden statue and silver coated statue.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121144122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancement of camera captured text images with specular reflection
A. Visvanathan, T. Chattopadhyay, U. Bhattacharya
Specular reflection of light degrades the quality of scene images. Whenever specular reflection affects the text portion of such an image, its readability is reduced significantly, and it consequently becomes difficult for OCR software to detect and recognize such text. In the present work, we propose a novel but simple technique to enhance image regions affected by specular reflection. Pixels with specular reflection are first identified in the YUV color space; the affected region is then enhanced by interpolating plausible pixel values in YUV space. The proposed method has been compared against a few existing general-purpose image enhancement techniques, namely (i) histogram equalization, (ii) gamma correction and (iii) a Laplacian-filter-based enhancement method. The approach has been tested on images from the ICDAR 2003 Robust Reading Competition image database. We computed a Mean Opinion Score based measure to show that the proposed method outperforms the existing enhancement techniques in improving the readability of text in images affected by specular reflection.
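A minimal sketch of the detect-then-fill idea: saturated, low-chroma pixels in YUV are flagged as specular and then filled in. OpenCV inpainting stands in for the paper's YUV interpolation, and the thresholds are illustrative assumptions.

```python
# Minimal sketch: detect specular pixels in YUV, then fill them by inpainting.
import cv2
import numpy as np

def suppress_specular(bgr, y_thresh=230, chroma_thresh=12):
    """bgr: 8-bit 3-channel image. Returns an image with highlights filled in."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    chroma = np.abs(u.astype(int) - 128) + np.abs(v.astype(int) - 128)
    mask = ((y > y_thresh) & (chroma < chroma_thresh)).astype(np.uint8) * 255
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))   # cover highlight borders
    return cv2.inpaint(bgr, mask, 3, cv2.INPAINT_TELEA)
```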
{"title":"Enhancement of camera captured text images with specular reflection","authors":"A. Visvanathan, T. Chattopadhyay, U. Bhattacharya","doi":"10.1109/NCVPRIPG.2013.6776189","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776189","url":null,"abstract":"Specular reflection of light degrades the quality of scene images. Whenever specular reflection affects the text portion of such an image, its readability is reduced significantly. Consequently, it becomes difficult for an OCR software to detect and recognize similar texts. In the present work, we propose a novel but simple technique to enhance the region of the image with specular reflection. The pixels with specular reflection were identified in YUV color plane. In the next step, it enhances the region by interpolating possible pixel values in YUV space. The proposed method has been compared against a few existing general purpose image enhancement techniques which include (i) histogram equalization, (ii) gamma correction and (iii) Laplacian filter based enhancement method. The proposed approach has been tested on some images from ICDAR 2003 Robust Reading Competition image database. We computed a Mean Opinion Score based measure to show that the proposed method outperforms the existing enhancement techniques for enhancement of readability of texts in images affected by specular reflection.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116848309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Fusion of satellite images using Compressive Sampling Matching Pursuit (CoSaMP) method
B. Sathyabama, S. Sankari, S. Nayagara
Fusion of a Low Resolution Multi-Spectral (LRMS) image and a High Resolution Panchromatic (HRPAN) image is a very important topic in the field of remote sensing. This paper handles the fusion of satellite images using a sparse representation of the data. The high-resolution MS image is produced from a sparse representation reconstructed from the HRPAN and LRMS images using Compressive Sampling Matching Pursuit (CoSaMP), which builds on the Orthogonal Matching Pursuit (OMP) algorithm. Sparse coefficients are produced by correlating the LRMS image patches with the LR PAN dictionary, and the HRMS image is formed by convolving the sparse coefficients with the HR PAN dictionary. WorldView-2 satellite images (HRPAN and LRMS) of Madurai, Tamil Nadu are used to test the proposed method. The experimental results show that this method preserves the spectral and spatial details of the input images well through adaptive learning. Compared to other well-known methods, the proposed method offers high-quality results, achieving a Quality with No Reference (QNR) index of 87.28%.
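CoSaMP itself is a standard greedy sparse solver; a compact generic implementation is sketched below, not tied to the paper's Pan/MS dictionaries. Here `Phi` is the dictionary (measurement matrix), `y` the signal or patch, and `s` the target sparsity; the iteration limit and tolerance are assumptions.

```python
# Minimal sketch of the standard CoSaMP recovery loop.
import numpy as np

def cosamp(Phi, y, s, max_iter=30, tol=1e-6):
    x = np.zeros(Phi.shape[1])
    residual = y.copy()
    for _ in range(max_iter):
        proxy = Phi.T @ residual
        omega = np.argsort(np.abs(proxy))[-2 * s:]           # 2s strongest atoms
        support = np.union1d(omega, np.flatnonzero(x))        # merge with current support
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros_like(x)
        x[support] = coeffs
        keep = np.argsort(np.abs(x))[-s:]                     # prune to the s largest
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned
        residual = y - Phi @ x                                # update the residual
        if np.linalg.norm(residual) < tol:
            break
    return x
```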
{"title":"Fusion of satellite images using Compressive Sampling Matching Pursuit (CoSaMP) method","authors":"B. Sathyabama, S. Sankari, S. Nayagara","doi":"10.1109/NCVPRIPG.2013.6776256","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776256","url":null,"abstract":"Fusion of Low Resolution Multi Spectral (LRMS) image and High Resolution Panchromatic (HRPAN) image is a very important topic in the field of remote sensing. This paper handles the fusion of satellite images with sparse representation of data. The High resolution MS image is produced from the sparse, reconstructed from HRPAN and LRMS images using Compressive Sampling Matching Pursuit (CoSaMP) based on Orthogonal Matching Pursuit (OMP) algorithm. Sparse coefficients are produced by correlating the LR MS image patches with the LR PAN dictionary. The HRMS is formed by convolving the Sparse coefficients with the HR PAN dictionary. The world view -2 satellite images (HRPAN and LRMS) of Madurai, Tamil Nadu are used to test the proposed method. The experimental results show that this method can well preserve spectral and spatial details of the input images by adaptive learning. While compared to other well-known methods the proposed method offers high quality results to the input images by providing 87.28% Quality with No Reference (QNR).","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127213784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Despeckling SAR images in the lapped transform domain
D. Hazarika, M. Bhuyan
In this paper, a novel lapped transform (LT) based approach to SAR image despeckling is introduced. It is shown that the LT coefficients of log-transformed, noise-free SAR images obey a Generalized Gaussian distribution. The proposed method uses a Bayesian minimum mean square error (MMSE) estimator based on modeling the global distribution of the rearranged LT coefficients in a subband with a Generalized Gaussian distribution. Finally, the algorithm is implemented in cycle-spinning mode to compensate for the lack of translation invariance of the LT. Experiments are carried out on synthetically speckled natural images and on SAR images. Compared with several existing despeckling techniques, the proposed Bayesian technique in the LT framework yields very good despeckling results while preserving the important details and textural information of the scene.
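Cycle spinning is the easiest part to illustrate: the shift-variant transform-domain denoiser is applied to several circular shifts of the log-image and the unshifted results are averaged. In the sketch below, `despeckle_once` is a placeholder for the LT-domain Bayesian MMSE shrinkage, which is not implemented here, and the shift set is an assumption.

```python
# Minimal sketch of a cycle-spinning wrapper around a shift-variant denoiser.
import numpy as np

def cycle_spin(img, despeckle_once, shifts=((0, 0), (0, 4), (4, 0), (4, 4))):
    acc = np.zeros_like(img, dtype=float)
    for dy, dx in shifts:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        restored = despeckle_once(shifted)                 # shift-variant denoiser
        acc += np.roll(np.roll(restored, -dy, axis=0), -dx, axis=1)
    return acc / len(shifts)

# Hypothetical usage on a speckled image I (multiplicative noise): work in the log domain.
# log_clean = cycle_spin(np.log(I + 1e-6), despeckle_once=my_lt_mmse_shrinkage)
# clean = np.exp(log_clean)
```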
{"title":"Despeckling SAR images in the lapped transform domain","authors":"D. Hazarika, M. Bhuyan","doi":"10.1109/NCVPRIPG.2013.6776255","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776255","url":null,"abstract":"In this paper, a novel lapped transform (LT) based approach to SAR image despeckling is introduced. It is shown that LT coefficients of the log transformed, noise free SAR images, obey Generalized Gaussian distribution. The proposed method uses a Bayesian minimum mean square error (MMSE) estimator which is based on modeling the global distribution of the rearranged LT coefficients in a subband using Generalized Gaussian distribution. Finally the proposed algorithm is implemented in cycle spinning mode to compensate for the lack of translation invariance property of LT. Experiments are carried out using synthetically speckled natural and SAR images. The proposed Bayesian based technique in LT based framework, when compared with several existing despeckling techniques, yields very good despeckling results while preserving the important details and textural information of the scene.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126579689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
Multi-resolution image fusion using multistage guided filter
Sharad Joshi, Kishor P. Upla, M. Joshi
In this paper, we propose a multi-resolution image fusion approach based on a multistage guided filter (MGF). Given a high spatial resolution panchromatic (Pan) image and a high spectral resolution multi-spectral (MS) image, the multi-resolution image fusion algorithm obtains a single fused image with both high spectral and high spatial resolution. Here, we extract the missing high-frequency details of the MS image using the multistage guided filter. The detail extraction process exploits the relationship between the Pan and MS images by using one of them as a guidance image and extracting details from the other. In this way, the spatial distortion of the MS image is reduced by consistently combining the details obtained from both types of images. The final fused image is obtained by adding the extracted high-frequency details to the corresponding MS image. The results of the proposed algorithm are compared with commonly used traditional methods as well as with a recently proposed method on Quickbird, Ikonos-2 and Worldview-2 satellite images. The quantitative assessment uses conventional measures as well as a relatively new index, quality with no reference (QNR), which does not require a reference image. The results and measures clearly show a significant improvement in the quality of the fused image with the proposed approach.
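As a rough single-stage illustration (the paper's multistage scheme is not reproduced), the sketch below implements a plain box-filter guided filter and uses it to transfer the Pan image's high frequencies to an upsampled MS band. The window size, regularization `eps`, and the choice of guidance image are assumptions.

```python
# Minimal sketch: single-stage guided-filter detail extraction and injection.
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(I, p, size=9, eps=1e-3):
    """Filter p under guidance I using box-filter means (standard guided filter)."""
    I, p = np.asarray(I, float), np.asarray(p, float)
    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    cov_Ip = uniform_filter(I * p, size) - mean_I * mean_p
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def fuse_band(pan, ms_band):
    ms_up = zoom(ms_band, (pan.shape[0] / ms_band.shape[0],
                           pan.shape[1] / ms_band.shape[1]), order=1)
    low = guided_filter(ms_up, pan)          # Pan filtered under MS guidance
    return ms_up + (pan - low)               # inject the missing high frequencies
```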
{"title":"Multi-resolution image fusion using multistage guided filter","authors":"Sharad Joshi, Kishor P. Upla, M. Joshi","doi":"10.1109/NCVPRIPG.2013.6776257","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2013.6776257","url":null,"abstract":"In this paper, we propose a multi-resolution image fusion approach based on multistage guided filter (MGF). Given the high spatial resolution panchromatic (Pan) and high spectral resolution multi-spectral (MS) images, the multi-resolution image fusion algorithm obtains a single fused image having both the high spectral and the high spatial resolutions. Here, we extract the missing high frequency details of MS image by using multistage guided filter. The detail extraction process exploits the relationship between the Pan and MS images by utilizing one of them as a guidance image and extracting details from the other. This way the spatial distortion of MS image is reduced by consistently combining the details obtained using both types of images. The final fused image is obtained by adding the extracted high frequency details to corresponding MS image. The results of the proposed algorithm are compared with the commonly used traditional methods as well as with a recently proposed method using Quickbird, Ikonos-2 and Worldview-2 satellite images. The quantitative assessment is evaluated using the conventional measures as well as using a relatively new index i.e., quality with no reference (QNR) which does not require a reference image. The results and measures clearly show that there is significant improvement in the quality of the fused image using the proposed approach.","PeriodicalId":436402,"journal":{"name":"2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129178476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4