Shadow removal for aerial imagery by information theoretic intrinsic image analysis
Vivek Kwatra, Mei Han, Shengyang Dai
2012 IEEE International Conference on Computational Photography (ICCP)
Pub Date: 2012-04-28 | DOI: 10.1109/ICCPhot.2012.6215222
We present a novel technique for shadow removal based on an information theoretic approach to intrinsic image analysis. Our key observation is that any illumination change in the scene tends to increase the entropy of observed texture intensities. Similarly, the presence of texture in the scene increases the entropy of the illumination function. Consequently, we formulate the separation of an image into texture and illumination components as the minimization of the entropy of each component. We employ a non-parametric kernel-based quadratic entropy formulation and present an efficient multi-scale iterative optimization algorithm for minimizing the resulting energy functional. Our technique may be employed either fully automatically, using a proposed learning-based method for automatic initialization, or with a small amount of user interaction. As we demonstrate, our method is particularly suitable for aerial images, which typically contain either distinctive texture patterns (e.g., building facades) or soft shadows with large diffuse regions (e.g., cloud shadows).
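The quadratic (Renyi) entropy with a Gaussian Parzen kernel is attractive for this kind of optimization because it admits a closed-form pairwise estimate. Below is a minimal sketch of that estimator in Python; the function name, the 1-D setting, and the bandwidth are illustrative choices of ours, not the paper's implementation.

```python
import numpy as np

def quadratic_entropy(samples, sigma=0.1):
    """Renyi quadratic entropy of a 1-D sample set under a Gaussian
    Parzen estimate: H2 = -log( (1/N^2) sum_ij G_{sigma*sqrt(2)}(x_i - x_j) ).
    The convolution of two Gaussian kernels doubles the variance."""
    x = np.asarray(samples, dtype=float)
    d = x[:, None] - x[None, :]          # all pairwise differences
    s2 = 2.0 * sigma ** 2                # variance of the convolved kernel
    g = np.exp(-d ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return -np.log(g.mean())

# Tightly clustered intensities yield lower entropy than spread-out ones,
# which is the property the separation objective exploits.
rng = np.random.default_rng(0)
low = quadratic_entropy(rng.normal(0.0, 0.01, 200))
high = quadratic_entropy(rng.normal(0.0, 1.0, 200))
```

Because the estimate reduces to a sum over sample pairs, its gradient is also available in closed form, which is what makes iterative minimization of such an energy practical.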
Alignment and mosaicing of non-overlapping images
Y. Poleg, Shmuel Peleg
2012 IEEE International Conference on Computational Photography (ICCP)
Pub Date: 2012-04-01 | DOI: 10.1109/ICCPhot.2012.6215214
Image alignment and mosaicing are usually performed on a set of overlapping images, using features in the area of overlap for alignment and for seamless stitching. Without image overlap, current methods are helpless; this is the case we address in this paper. So if a traveler wants to create a panoramic mosaic of a scene from pictures taken on a trip, but realizes back home that the pictures do not overlap, there is still hope. The proposed process has three stages: (i) images are extrapolated beyond their original boundaries, in the hope that the extrapolated areas will cover the gaps between them; this extrapolation becomes more blurred farther from the original image; (ii) the extrapolated images are aligned and their relative positions recovered; (iii) the gaps between the images are inpainted to create a seamless mosaic image.
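Stage (i) can be illustrated with a toy extrapolation that grows blurrier with distance from the image boundary. The sketch below simply averages an ever-wider window of preceding columns; everything here (the function, the windowing rule) is a hypothetical stand-in, not the paper's extrapolation method.

```python
import numpy as np

def extrapolate_right(img, pad):
    """Extend a grayscale image `pad` columns past its right edge.
    Each new column averages a window of preceding columns whose width
    grows with distance, so the extrapolation blurs progressively."""
    out = np.asarray(img, dtype=float)
    for k in range(pad):
        w = 1 + k                                   # window widens with distance
        new_col = out[:, -w:].mean(axis=1, keepdims=True)
        out = np.concatenate([out, new_col], axis=1)
    return out

img = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))     # simple horizontal ramp
ext = extrapolate_right(img, pad=5)                 # 5 extrapolated columns
```

The progressive blur matters for stage (ii): alignment can only rely on the coarse, low-frequency content that survives far from the original boundary, so matching extrapolated regions is inherently a coarse-scale operation.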
Super-resolution from internet-scale scene matching
Libin Sun, James Hays
2012 IEEE International Conference on Computational Photography (ICCP)
Pub Date: 2012-04-01 | DOI: 10.1109/ICCPhot.2012.6215221
In this paper, we present a highly data-driven approach to the task of single image super-resolution. Super-resolution is a challenging problem due to its massively under-constrained nature: for any low-resolution input there are numerous high-resolution possibilities. Our key observation is that, even with extremely low-resolution input images, we can use global scene descriptors and Internet-scale image databases to find similar scenes that provide ideal example textures to constrain the image upsampling problem. We show quantitatively that the statistics of scene matches are more predictive than internal image statistics for the super-resolution task. Finally, we build on recent patch-based texture transfer techniques to hallucinate texture detail and compare our super-resolution results with those of other recent methods.
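The retrieval step can be sketched as computing a coarse global descriptor per image and taking the nearest neighbor in descriptor space. The toy descriptor below (a grid of average intensities) is only a stand-in for the GIST-style descriptors typically used for scene matching; the names and parameters are ours.

```python
import numpy as np

def scene_descriptor(img, grid=4):
    """Coarse global descriptor: mean intensity over a grid x grid tiling.
    A crude stand-in for GIST-style global scene descriptors."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    d = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            d[i, j] = img[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid].mean()
    return d.ravel()

def best_scene_match(query, database):
    """Index of the database image whose descriptor is closest in L2."""
    qd = scene_descriptor(query)
    dists = [np.linalg.norm(qd - scene_descriptor(im)) for im in database]
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
scene = rng.random((32, 32))
database = [rng.random((32, 32)) for _ in range(5)] + [scene + 0.01]
idx = best_scene_match(scene, database)    # the near-duplicate scene wins
```

Because the descriptor is global and coarse, it remains meaningful even when the query is extremely low-resolution, which is exactly what the retrieval step requires here.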
Exposing image splicing with inconsistent local noise variances
Xunyu Pan, Xing Zhang, Siwei Lyu
2012 IEEE International Conference on Computational Photography (ICCP)
Pub Date: 2012-04-01 | DOI: 10.1109/ICCPhot.2012.6215223
Image splicing is a simple and common image tampering operation in which a region selected from one image is pasted into another image to change its content. Based on the fact that images from different origins tend to contain different amounts of noise, introduced by the sensors or by post-processing steps, we describe an effective method to expose image splicing by detecting inconsistencies in local noise variances. Our method estimates local noise variances using the observation that the kurtosis values of natural images in band-pass filtered domains tend to concentrate around a constant value, and it is accelerated by the use of integral images. We demonstrate the efficacy and robustness of our method on several sets of forged images generated with image splicing.
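The integral-image acceleration mentioned above applies to computing local statistics in constant time per window. The sketch below uses summed-area tables of the image and its square to obtain local variance over sliding blocks; it illustrates only that acceleration, not the paper's kurtosis-based noise estimator, and all names are ours.

```python
import numpy as np

def local_variance(img, win=8):
    """Local variance over sliding win x win blocks via integral images
    (summed-area tables) of the image and of its square."""
    img = np.asarray(img, dtype=float)
    pad = lambda a: np.pad(a, ((1, 0), (1, 0)))     # zero row/col for box sums
    S1 = pad(img.cumsum(0).cumsum(1))               # integral image of I
    S2 = pad((img ** 2).cumsum(0).cumsum(1))        # integral image of I^2

    def box(S, y, x):                               # sum over block at (y, x)
        return S[y + win, x + win] - S[y, x + win] - S[y + win, x] + S[y, x]

    h, w = img.shape
    n = win * win
    var = np.empty((h - win + 1, w - win + 1))
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            m = box(S1, y, x) / n                   # local mean
            var[y, x] = box(S2, y, x) / n - m * m   # E[I^2] - E[I]^2
    return var

# Two regions with different noise levels, mimicking a spliced image.
rng = np.random.default_rng(2)
quiet = rng.normal(0.0, 0.05, (16, 16))
noisy = rng.normal(0.0, 0.30, (16, 16))
img = np.concatenate([quiet, noisy], axis=1)
v = local_variance(img, win=8)                      # variance map exposes the seam
```

Each block sum costs four table lookups regardless of window size, so scanning the whole image is linear in pixel count rather than scaling with the window area.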