
2007 IEEE International Conference on Image Processing: Latest Publications

Steganalyzing Texture Images
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379115
Chunhua Chen, Y. Shi, Guorong Xuan
A texture image is of noisy nature in its spatial representation. As a result, the data hidden in texture images, in particular in raw texture images, are hard to detect with current steganalytic methods. We propose an effective universal steganalyzer in this paper, which combines features, i.e., statistical moments of 1-D and 2-D characteristic functions extracted from the spatial representation and the block discrete cosine transform (BDCT) representations (with a set of different block sizes) of a given test image. This novel scheme can greatly improve the capability of attacking steganographic methods applied to texture images. In addition, it is shown that this scheme can be used as an effective universal steganalyzer for both texture and non-texture images.
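To make the feature construction concrete, the sketch below computes statistical moments of the characteristic function (the DFT of a histogram) for the spatial representation and for block DCT (BDCT) representations with several block sizes, and concatenates them into one feature vector. This is an illustrative reading of the abstract, not the authors' implementation; the bin count, moment orders, and block sizes are assumptions.

```python
# Illustrative sketch: characteristic-function moment features from the spatial
# and block-DCT representations of an image (bin count, moment orders and block
# sizes are assumptions, not the paper's settings).
import numpy as np
from scipy.fft import dct

def cf_moments(values, n_bins=256, orders=(1, 2, 3)):
    """Statistical moments of the characteristic function (DFT of the histogram)."""
    hist, _ = np.histogram(values, bins=n_bins)
    cf = np.abs(np.fft.fft(hist))[: n_bins // 2]      # one-sided magnitude
    freqs = np.arange(len(cf))
    denom = cf.sum() + 1e-12
    return [float((freqs ** k * cf).sum() / denom) for k in orders]

def bdct(image, block):
    """2-D DCT applied independently to non-overlapping block x block tiles."""
    h, w = image.shape
    h, w = h - h % block, w - w % block
    out = np.empty((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = dct(dct(tile, axis=0, norm='ortho'),
                                                axis=1, norm='ortho')
    return out

def steganalysis_features(image, block_sizes=(2, 4, 8)):
    """Concatenate CF moments of the spatial image and of its BDCT representations."""
    feats = cf_moments(image.ravel())
    for b in block_sizes:
        feats += cf_moments(bdct(image.astype(float), b).ravel())
    return np.array(feats)
```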
Citations: 14
Image Stabilization Based on Fusing the Visual Information in Differently Exposed Images
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378905
M. Tico, Markku Vehviläinen
The objective of image stabilization is to prevent or remove the motion blur degradation from images. We introduce a new approach to image stabilization based on combining information available in two differently exposed images of the same scene. In addition to the image normally captured by the system, with an exposure time determined by the illumination conditions, a very shortly exposed image is also acquired. The difference between the exposure times of the two images determines differences in their degradations which are exploited in order to recover the original image of the scene. We formulate the problem as a maximum a posteriori (MAP) estimation based on the degradation models of the two observed images, as well as by imposing an edge-preserving image prior. The proposed method is demonstrated through a series of simulation experiments, and visual examples on natural images.
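As a rough illustration of how two differently exposed frames can be fused, the sketch below runs gradient descent on a MAP-style objective with one data term for the blurred long exposure, one for the noisy short exposure, and a Charbonnier edge-preserving prior. It is not the paper's estimator; the known blur kernel, the term weights, and the step size are assumptions.

```python
# Illustrative MAP-style fusion of a blurred long exposure and a noisy short
# exposure (not the paper's algorithm); psf, weights and step size are assumptions.
import numpy as np
from scipy.ndimage import convolve

def map_fuse(long_exp, short_exp, psf, w_long=1.0, w_short=0.2,
             lam=0.05, eps=1e-3, step=0.2, n_iter=100):
    x = short_exp.astype(float).copy()        # start from the sharp but noisy frame
    psf_flip = psf[::-1, ::-1]                # adjoint of the blur operator
    for _ in range(n_iter):
        # long-exposure data term: x blurred by the PSF should match it
        r_long = convolve(x, psf) - long_exp
        g = w_long * convolve(r_long, psf_flip)
        # short-exposure data term: x should match it directly
        g += w_short * (x - short_exp)
        # Charbonnier prior on horizontal/vertical differences (edge preserving)
        for axis in (0, 1):
            d = np.diff(x, axis=axis, append=np.take(x, [-1], axis=axis))
            p = d / np.sqrt(d * d + eps * eps)
            g += lam * (np.roll(p, 1, axis=axis) - p)
        x -= step * g
    return x
```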
Citations: 21
Complexity Control for Real-Time Video Coding
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378895
E. Akyol, D. Mukherjee, Yuxin Liu
A methodology for complexity scalable video encoding and complexity control within the framework of the H.264/AVC video encoder is presented. To yield good rate-distortion performance under strict complexity/time constraints, for instance in real-time communication, a framework for optimal complexity allocation at the macroblock level is necessary. We developed a macroblock-level, fast-motion-estimation-based complexity scalable motion/mode search algorithm where the complexity is adapted jointly by parameters that determine the aggressiveness of an early stopping criterion, the number of ordered modes searched, and the accuracy of motion estimation steps for the INTER modes. Next, these complexity parameters are adapted per macroblock based on a control loop to approximately satisfy an encoding frame rate target. The optimal manner of adapting the parameters is derived from prior training. Results using the developed scalable complexity H.264/AVC encoder demonstrate the benefit of adaptive complexity allocation over uniform complexity scaling.
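The control loop can be pictured as a proportional correction that raises or lowers an encoder complexity level so that the measured per-frame encoding time tracks the real-time budget. The sketch below illustrates only that feedback idea, not the paper's macroblock-level allocation; the encode_frame callable and the gain are assumptions.

```python
# Illustrative feedback loop scaling an encoder "complexity level" to a
# per-frame time budget; encode_frame and the gain are assumed for the example.
import time

def encode_sequence(frames, encode_frame, target_fps=30.0, gain=0.5,
                    min_level=0.1, max_level=1.0):
    budget = 1.0 / target_fps          # seconds available per frame
    level = max_level                  # start with full search complexity
    for frame in frames:
        t0 = time.perf_counter()
        encode_frame(frame, level)     # level could scale search range / modes tried
        elapsed = time.perf_counter() - t0
        # proportional correction: too slow -> lower complexity, and vice versa
        level *= 1.0 + gain * (budget - elapsed) / budget
        level = min(max_level, max(min_level, level))
```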
Citations: 32
Image Denoising with Nonparametric Hidden Markov Trees
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379261
Jyri J. Kivinen, Erik B. Sudderth, Michael I. Jordan
We develop a hierarchical, nonparametric statistical model for wavelet representations of natural images. Extending previous work on Gaussian scale mixtures, wavelet coefficients are marginally distributed according to infinite, Dirichlet process mixtures. A hidden Markov tree is then used to couple the mixture assignments at neighboring nodes. Via a Monte Carlo learning algorithm, the resulting hierarchical Dirichlet process hidden Markov tree (HDP-HMT) model automatically adapts to the complexity of different images and wavelet bases. Image denoising results demonstrate the effectiveness of this learning process.
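The marginal model, wavelet coefficients drawn from an infinite Dirichlet process mixture of zero-mean Gaussians, can be illustrated generatively with a truncated stick-breaking construction, as sketched below. The truncation level and the gamma base measure are assumptions, and the hidden Markov tree coupling and Monte Carlo learning from the paper are not shown.

```python
# Illustrative generative draw from a (truncated) Dirichlet-process mixture of
# zero-mean Gaussians; truncation level and base measure are assumptions.
import numpy as np

def sample_dp_mixture_coeffs(n, alpha=1.0, trunc=50, seed=None):
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=trunc)                          # stick-breaking proportions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))     # mixture weights
    scales = rng.gamma(shape=2.0, scale=1.0, size=trunc)          # per-component variances
    z = rng.choice(trunc, size=n, p=w / w.sum())                  # component assignments
    return rng.normal(0.0, np.sqrt(scales[z]))                    # Gaussian scale-mixture draws
```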
Citations: 25
Enhanced Quality Scalability for JPEG2000 Code-Streams by the Characterization of the Rate-Distortion Slope
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379168
Francesc Aulí Llinàs, J. Serra-Sagristà, Joan Bartrina-Rapesta, J. L. Monteagudo-Pereira
Quality scalability is a fundamental feature of JPEG2000, achieved through the use of quality layers. Two points related to the use of quality layers may need to be addressed when dealing with JPEG2000 code-streams: 1) the lack of quality scalability of single-quality-layer code-streams, and 2) the non-optimal rate-distortion behavior of window-of-interest transmission. This paper introduces a new rate control method that can be applied to already encoded code-streams, addressing these two points. Its key feature is a novel characterization that can fairly estimate the rate-distortion slope of the coding passes of code-blocks without using any measure based on the original image or related to the encoding process. Experimental results suggest that the proposed method is able to supply quality scalability to already encoded code-streams, achieving near-optimal coding performance. The low computational cost of the method makes it suitable for use in interactive transmissions.
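The role of rate-distortion slopes in re-layering an encoded stream can be illustrated with a generic greedy truncation rule: repeatedly include the next available coding pass with the steepest estimated slope, keeping each code-block's passes in coding order, until a byte budget is reached. The sketch below shows only this generic rule, not the paper's slope characterization; the pass-list structure is an assumption.

```python
# Illustrative greedy truncation by estimated R-D slope; the data layout
# {block_id: [(n_bytes, rd_slope), ...]} is an assumption for the example.
import heapq

def truncate_streams(blocks, byte_budget):
    """Include coding passes in decreasing-slope order, keeping each block's
    passes as a prefix, until the byte budget is exhausted."""
    heap, ptr, chosen, used = [], {b: 0 for b in blocks}, [], 0
    for b, passes in blocks.items():
        if passes:
            n_bytes, slope = passes[0]
            heapq.heappush(heap, (-slope, b, 0, n_bytes))
    while heap:
        neg_slope, b, i, n_bytes = heapq.heappop(heap)
        if used + n_bytes > byte_budget:
            break                                  # simplification: stop at first overflow
        chosen.append((b, i))
        used += n_bytes
        ptr[b] = i + 1
        if ptr[b] < len(blocks[b]):
            n_next, s_next = blocks[b][ptr[b]]
            heapq.heappush(heap, (-s_next, b, ptr[b], n_next))
    return chosen, used
```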
Citations: 3
Object Recognition by Learning Informative, Biologically Inspired Visual Features
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378921
Yang Wu, Nanning Zheng, Qubo You, S. Du
This paper presents a novel, effective way to improve the object recognition performance of a biologically-motivated model by learning informative visual features. The original model has an obvious bottleneck when learning features. Therefore, we propose a circumspect algorithm to solve this problem. First, a novel information factor was designed to find the most informative feature for each image, and then complementary features were selected based on additional information. Finally, an intra-class clustering strategy was used to select the most typical features for each category. By integrating two other improvements, our algorithm performs better than any other system so far based on the same model.
Citations: 2
High Dynamic Range Image and Video Compression - Fidelity Matching Human Visual Performance
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378878
Rafał K. Mantiuk, Grzegorz Krawczyk, K. Myszkowski, H. Seidel
The vast majority of digital image and video material stored today captures only a fraction of the visual information visible to the human eye and does not offer sufficient quality to fully exploit the capabilities of new display devices. High dynamic range (HDR) image and video formats encode the full visible range of luminance and color gamut, thus offering ultimate fidelity, limited only by the capabilities of the human eye and not by any existing technology. In this paper we demonstrate how existing image and video compression standards can be extended to encode HDR content efficiently. This is achieved by a custom color space for encoding HDR pixel values that is derived from visual performance data. We also demonstrate how HDR image and video compression can be designed so that it is backward compatible with existing formats.
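The core idea of a perceptually motivated luminance encoding can be illustrated with a simple log-like mapping from absolute luminance to integer codes, so that one code step corresponds to a roughly constant visible contrast step before the image is handed to a conventional codec. The curve and the 12-bit depth below are assumptions, not the color space derived in the paper.

```python
# Illustrative log-like luminance-to-code mapping for HDR pixel values; the
# curve, luminance range and bit depth are assumptions, not the paper's space.
import numpy as np

def luminance_to_code(lum, l_min=1e-4, l_max=1e8, bits=12):
    """Map absolute luminance (cd/m^2) to integer codes on a logarithmic scale."""
    lum = np.clip(lum, l_min, l_max)
    t = (np.log10(lum) - np.log10(l_min)) / (np.log10(l_max) - np.log10(l_min))
    return np.round(t * (2 ** bits - 1)).astype(np.uint16)

def code_to_luminance(code, l_min=1e-4, l_max=1e8, bits=12):
    """Inverse mapping from integer codes back to absolute luminance."""
    t = code.astype(float) / (2 ** bits - 1)
    return 10.0 ** (np.log10(l_min) + t * (np.log10(l_max) - np.log10(l_min)))
```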
Citations: 26
Interferometric Synthetic Aperture Microscopy: Physics-Based Image Reconstruction from Optical Coherence Tomography Data
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379975
B. Davis, T. Ralston, D. Marks, S. Boppart, P. Carney
Optical coherence tomography (OCT) is an optical ranging technique analogous to radar: detection of back-scattered light produces a signal that is temporally localized at times of flight corresponding to the locations of scatterers in the object. However, the interferometric collection technique used in OCT allows, in principle, the coherent collection of data, i.e., amplitude and phase information can be extracted. Interferometric synthetic aperture microscopy (ISAM) adds phase-stable data collection to OCT instrumentation and employs physics-based processing analogous to that used in synthetic aperture radar (SAR). That is, the complex nature of the coherent data is exploited to give gains in image quality. Specifically, diffraction-limited resolution is achieved throughout the sample, not just within the focal volume of the illuminating field. Simulated and experimental verifications of this effect are presented. ISAM's computational focusing obviates the trade-off between lateral resolution and depth of focus seen in traditional OCT.
Citations: 4
Optimal Denoising in Redundant Bases
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379259
M. Raphan, Eero P. Simoncelli
Image denoising methods are often based on estimators chosen to minimize mean squared error (MSE) within the sub-bands of a multi-scale decomposition. But this does not guarantee optimal MSE performance in the image domain, unless the decomposition is orthonormal. We prove that despite this suboptimality, the expected image-domain MSE resulting from a representation that is made redundant through spatial replication of basis functions (e.g., cycle-spinning) is less than or equal to that resulting from the original non-redundant representation. We also develop an extension of Stein's unbiased risk estimator (SURE) that allows minimization of the image-domain MSE for estimators that operate on subbands of a redundant decomposition. We implement an example, jointly optimizing the parameters of scalar estimators applied to each subband of an overcomplete representation, and demonstrate substantial MSE improvement over the sub-optimal application of SURE within individual subbands.
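For context, the sketch below implements the redundant baseline the abstract builds on: cycle spinning with per-subband soft thresholding, where each threshold is chosen by minimizing the classical SURE expression for soft thresholding under Gaussian noise. The image-domain extension of SURE proposed in the paper is not implemented here; the one-level Haar transform, the even-length 1-D signal, and the shift count are assumptions.

```python
# Illustrative per-subband SURE soft thresholding inside a cycle-spinning loop
# (the redundant baseline, not the paper's image-domain estimator).
import numpy as np

def sure_soft(y, t, sigma):
    """Stein's unbiased risk estimate of MSE for soft thresholding at t."""
    n = y.size
    return (n * sigma**2
            - 2 * sigma**2 * np.count_nonzero(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2))

def sure_threshold(y, sigma):
    """Pick the threshold among |y| values that minimizes the SURE risk."""
    cands = np.sort(np.abs(y))
    risks = [sure_soft(y, t, sigma) for t in cands]
    return cands[int(np.argmin(risks))]

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def cycle_spin_denoise(x, sigma, shifts=8):
    """Average shifted denoise-unshift passes over an even-length 1-D signal."""
    acc = np.zeros_like(x, dtype=float)
    for s in range(shifts):
        xs = np.roll(x, s)
        a = (xs[0::2] + xs[1::2]) / np.sqrt(2)     # one-level Haar approximation
        d = (xs[0::2] - xs[1::2]) / np.sqrt(2)     # one-level Haar detail
        d = soft(d, sure_threshold(d, sigma))      # threshold detail coefficients
        rec = np.empty_like(xs, dtype=float)
        rec[0::2] = (a + d) / np.sqrt(2)           # Haar synthesis
        rec[1::2] = (a - d) / np.sqrt(2)
        acc += np.roll(rec, -s)
    return acc / shifts
```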
Citations: 10
High Dimension Lattice Vector Quantizer Design for Generalized Gaussian Distributions
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379985
L. H. Fonteles, M. Antonini
Lattice vector quantization (LVQ) is a simple but powerful tool for vector quantization and can be viewed as a vector generalization of uniform scalar quantization. Like VQ, LVQ is able to take into account spatial dependencies between adjacent pixels as well as to take advantage of the n-dimensional space-filling gain. However, the design of a lattice vector quantizer is not trivial, particularly when one wants to use high-dimensional vectors. Indeed, using high dimensions involves lattice codebooks with a huge population, which makes indexing difficult. On the other hand, in the framework of the wavelet transform, bit allocation across the subbands must be done in an optimal way. The use of VQ and the lack of non-asymptotic distortion-rate models for this kind of quantizer make this operation difficult. In this work we focus on the problems of efficient indexing and optimal bit allocation and propose efficient solutions.
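A concrete instance of lattice vector quantization is quantizing scaled vectors to the D_n lattice (integer points with even coordinate sum) using the Conway-Sloane closest-point rule, as sketched below. This only illustrates the basic LVQ operation; the step size is an illustrative parameter, and the indexing and bit-allocation problems addressed in the paper are not covered.

```python
# Illustrative lattice vector quantization to the D_n lattice; the step size is
# an assumed parameter, and no indexing or bit allocation is performed.
import numpy as np

def closest_dn(x):
    """Closest point of the D_n lattice (even coordinate sum) to vector x."""
    f = np.round(x)
    if int(f.sum()) % 2 == 0:
        return f
    # flip the coordinate with the largest rounding error to its second-nearest integer
    err = x - f
    i = int(np.argmax(np.abs(err)))
    g = f.copy()
    g[i] += 1.0 if err[i] > 0 else -1.0
    return g

def lvq_quantize(vectors, step=0.5):
    """Quantize each row of `vectors` to the scaled D_n lattice."""
    return np.array([step * closest_dn(v / step) for v in vectors])
```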
Citations: 8