{"title":"Depth coded shape from focus","authors":"Martin Lenz, David Ferstl, M. Rüther, H. Bischof","doi":"10.1109/ICCPhot.2012.6215218","DOIUrl":null,"url":null,"abstract":"We present a novel shape from focus method for high-speed shape reconstruction in optical microscopy. While the traditional shape from focus approach depends heavily on the presence of surface texture and requires a considerable amount of measurement time, our method can perform a reconstruction from only two images. It relies on the rapid projection of a binary pattern sequence while the object is moved continuously through the camera's focus range and a single image is continuously exposed. Deconvolution of the integral image allows direct decoding of the binary pattern and its associated depth. Experiments on a synthetic dataset and on real scenes show that a depth map can be reconstructed at only 3% of the memory cost and a fraction of the computational effort of traditional shape from focus.","PeriodicalId":169984,"journal":{"name":"2012 IEEE International Conference on Computational Photography (ICCP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Conference on Computational Photography (ICCP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCPhot.2012.6215218","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
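The decoding step described in the abstract — thresholding a recovered stack of binary pattern images and mapping each pixel's code word to a depth index — can be sketched in NumPy. This is a hypothetical illustration, not the authors' implementation: the Gray-coded pattern sequence (a common choice in structured light), the function name `decode_depth`, and the fixed 0.5 threshold are assumptions, and the paper's deconvolution step that recovers the pattern images from the single integrated exposure is not reproduced here.

```python
import numpy as np

def decode_depth(bit_images, threshold=0.5):
    """Recover a per-pixel depth index from K binary-pattern images.

    Hypothetical sketch: assumes each (H, W) image in `bit_images`
    carries one bit (MSB first) of a Gray-coded depth index, with
    pixel values normalized to [0, 1].
    """
    stack = np.stack(bit_images)                  # (K, H, W)
    bits = (stack > threshold).astype(np.uint32)  # threshold to 0/1 bits
    # Gray -> binary, MSB first: b_0 = g_0, b_k = b_{k-1} XOR g_k
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for k in range(1, bits.shape[0]):
        binary[k] = binary[k - 1] ^ bits[k]
    # Weight the bits (MSB first) into one integer index per pixel
    K = bits.shape[0]
    weights = (1 << np.arange(K - 1, -1, -1)).astype(np.uint32)
    return np.tensordot(weights, binary, axes=1)  # (H, W) depth indices
```

Each decoded index would then map to a metric depth through the known focus sweep (index times the axial step size), which is what lets a pair of integrated exposures stand in for a full focus stack.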