{"title":"A gentle introduction to coded computational photography","authors":"Horacio E. Fortunato, M. M. O. Neto","doi":"10.1109/SIBGRAPI-T.2011.13","DOIUrl":null,"url":null,"abstract":"Computational photography tries to expand the concept of traditional photography (a static two dimensional projection of a scene) using state-of-the-art technology. While this can be achieved by combining information from multiple conventional pictures, a more interesting challenge consists in encoding and recovering additional information from one (or more) image(s). Since a photograph results from the convolution of scene radiance with the camera's aperture (integrated over the exposure time), researchers have designed apertures with certain desirable spectral properties to facilitate the deconvolution process and, consequently, the recovery of scene information. Images captured using these so-called coded apertures can be deconvolved to create all-in-focus images, and to estimate scene depth, among other things. Images of moving objects acquired using a coded exposure (obtained by switching between a fully-closed and a fully-opened aperture, according to a predefined pattern) can be deconvolved to reduce motion blur. The notion of encoding information during image acquisition opens up new and exciting possibilities, which researchers have just begun to explore. This article provides a gentle introduction to coded photography, focusing on the fundamental concepts and essential mathematical tools.","PeriodicalId":131363,"journal":{"name":"2011 24th SIBGRAPI Conference on Graphics, Patterns, and Images Tutorials","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 24th SIBGRAPI Conference on Graphics, Patterns, and Images Tutorials","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIBGRAPI-T.2011.13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Computational photography seeks to expand the concept of traditional photography (a static two-dimensional projection of a scene) using state-of-the-art technology. While this can be achieved by combining information from multiple conventional pictures, a more interesting challenge lies in encoding and recovering additional information from one (or more) image(s). Since a photograph results from the convolution of scene radiance with the camera's aperture (integrated over the exposure time), researchers have designed apertures with desirable spectral properties that facilitate deconvolution and, consequently, the recovery of scene information. Images captured with these so-called coded apertures can be deconvolved to create all-in-focus images and to estimate scene depth, among other things. Images of moving objects acquired using a coded exposure (obtained by switching between a fully closed and a fully open aperture according to a predefined pattern) can be deconvolved to reduce motion blur. The notion of encoding information during image acquisition opens up new and exciting possibilities, which researchers have only begun to explore. This article provides a gentle introduction to coded photography, focusing on its fundamental concepts and essential mathematical tools.
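The following is a minimal 1-D sketch (not taken from the paper) of the coded-exposure idea described above. It models the captured signal as a convolution of the scene with an exposure kernel, then compares a conventional box exposure with a hypothetical pseudorandom on/off code: the box kernel has near-zero frequency components that make deconvolution ill-posed, while the coded kernel keeps the spectrum away from zero. The code length, noise level, and Wiener-style damping constant are illustrative assumptions.

```python
import numpy as np

# Illustrative 1-D coded-exposure (flutter-shutter) experiment.
rng = np.random.default_rng(0)

n = 256                       # length of the 1-D "scan line"
signal = rng.random(n)        # stand-in for scene radiance along the motion path

def make_kernel(code, n):
    """Zero-pad an exposure code to length n and normalize it to unit sum."""
    k = np.zeros(n)
    k[:len(code)] = code
    return k / code.sum()

box_code = np.ones(32)                              # conventional shutter: open the whole time
coded_code = rng.integers(0, 2, 32).astype(float)   # pseudorandom open/close pattern (assumption)
coded_code[0] = coded_code[-1] = 1.0                # keep the endpoints open (illustrative choice)

for name, code in [("box", box_code), ("coded", coded_code)]:
    k = make_kernel(code, n)
    K = np.fft.fft(k)

    # Blur = circular convolution of the scene with the exposure kernel, plus sensor noise.
    blurred = np.real(np.fft.ifft(np.fft.fft(signal) * K))
    blurred += 0.001 * rng.standard_normal(n)

    # Wiener-style (damped inverse) deconvolution.
    eps = 1e-3
    W = np.conj(K) / (np.abs(K) ** 2 + eps)
    recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * W))

    err = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)
    print(f"{name:5s}: min |K| = {np.abs(K).min():.4f}, relative recovery error = {err:.3f}")
```

Running this sketch typically shows a much smaller minimum spectral magnitude, and hence a larger recovery error, for the box exposure than for the coded one, which is the intuition behind choosing exposure (and aperture) codes with favorable spectral properties.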