
Latest publications: 2007 IEEE International Conference on Image Processing

Block-Based Gradient Domain High Dynamic Range Compression Design for Real-Time Applications
Pub Date : 2007-12-12 DOI: 10.1109/ICIP.2007.4403041
Tsun-Hsien Wang, Wei-Ming Ke, Ding-Chuang Zwao, F. Chen, C. Chiu
Due to progress in high dynamic range (HDR) capture technologies, displaying HDR images and video on conventional LCD devices has become an important topic. Many tone mapping algorithms have been proposed for rendering HDR images on conventional displays, but their intensive computation makes them impractical for video applications. In this paper, we present a real-time block-based gradient domain HDR compression scheme for image and video applications. Gradient domain HDR compression is selected as our tone mapping scheme for its ability to compress the dynamic range while preserving details. We divide an HDR image/frame into several equal blocks and process each with the modified gradient domain HDR compression. Gradients of smaller magnitude are attenuated less in each block to maintain local contrast and thus expose details. By solving the Poisson equation on the attenuated gradient field block by block, we reconstruct a low dynamic range image. A real-time Discrete Sine Transform (DST) architecture is proposed and developed to solve the Poisson equation. Our synthesis results show that our DST Poisson solver runs at a 50 MHz clock and occupies an area of 9 mm² in TSMC 0.18 µm technology.
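The per-block pipeline described above can be sketched in software: attenuate the block's gradient field so that smaller gradients are reduced less, then recover the image by solving the Poisson equation with a Discrete Sine Transform. This is a minimal NumPy/SciPy sketch, not the authors' hardware architecture; the Fattal-style attenuation factor and the parameters `alpha` and `beta` are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dstn, idstn

def attenuate(gx, gy, alpha=0.1, beta=0.85):
    """Scale gradients by (|G|/alpha)**(beta - 1): with beta < 1 the
    factor decreases with magnitude, so small gradients are attenuated
    less and local detail is preserved.  alpha/beta are assumed values."""
    mag = np.sqrt(gx * gx + gy * gy) + 1e-12
    s = (mag / alpha) ** (beta - 1.0)
    return gx * s, gy * s

def poisson_dst(f):
    """Solve the 5-point discrete Poisson equation lap(u) = f on the
    interior of a grid with zero Dirichlet boundaries.  The DST-I basis
    diagonalizes the discrete Laplacian, so the solve is a forward
    transform, a pointwise division by the eigenvalues, and an inverse
    transform."""
    m, n = f.shape
    li = 2.0 * np.cos(np.pi * np.arange(1, m + 1) / (m + 1)) - 2.0
    lj = 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)) - 2.0
    return idstn(dstn(f, type=1) / (li[:, None] + lj[None, :]), type=1)
```

Applying `poisson_dst` to the divergence of a block's attenuated gradient field yields that block's low dynamic range reconstruction.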
Citations: 16
Graph-Cut Rate Distortion Algorithm for Contourlet-Based Image Compression
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379273
M. Trocan, B. Pesquet-Popescu, J. Fowler
The geometric features of images, such as edges, are difficult to represent. When a redundant transform is used to extract them, the compression challenge is even greater. In this paper we present a new rate-distortion optimization algorithm based on graph theory that can efficiently encode the coefficients of a critically sampled, non-orthogonal, or even redundant transform, such as the contourlet decomposition. The basic idea is to construct a specialized graph whose minimum cut minimizes the energy functional. We propose to apply this technique to rate-distortion Lagrangian optimization in subband image coding. The method yields good compression results compared to the state-of-the-art JPEG2000 codec, as well as a general improvement in visual quality.
Citations: 7
Attack LSB Matching Steganography by Counting Alteration Rate of the Number of Neighbourhood Gray Levels
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378976
Fangjun Huang, Bin Li, Jiwu Huang
In this paper, we propose a new method for attacking LSB (least significant bit) matching steganography. Unlike LSB substitution, LSB matching changes the two or more least significant bit-planes of the cover image during embedding, so the characteristic pairs of values do not exist in the stego image. In our proposed method, we form an image by combining the two least significant bit-planes and divide it into 3x3 overlapping subimages. The subimages are grouped into four types, T1, T2, T3, and T4, according to the count of gray levels they contain. By embedding a random sequence via LSB matching and then computing the alteration rate of the number of elements in T1, we find that the alteration rate is normally higher for a cover image than for the corresponding stego image. This new finding is used as the discrimination rule in our method. Experimental results demonstrate that the proposed algorithm efficiently detects LSB matching steganography on uncompressed grayscale images.
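A minimal sketch of the feature described above: combine the two least significant bit-planes, classify every 3x3 overlapping subimage by its count of distinct gray levels, and measure how an embedding alters |T1|. The full-rate embedding and the simple relative-change statistic below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

def neighbourhood_type_counts(img):
    """Combine the two least significant bit-planes and count, for every
    3x3 overlapping subimage, how many distinct gray levels it contains.
    Returns counts for types T1..T4 (1..4 distinct levels)."""
    two_lsb = (img & 0b11).astype(np.uint8)      # values in {0, 1, 2, 3}
    h, w = two_lsb.shape
    counts = {k: 0 for k in (1, 2, 3, 4)}
    for i in range(h - 2):
        for j in range(w - 2):
            k = len(np.unique(two_lsb[i:i + 3, j:j + 3]))
            counts[k] += 1
    return counts

def t1_alteration_rate(img, rng=None):
    """Embed a full-rate random message by LSB matching (+/-1 chosen at
    random whenever the LSB must change) and report the relative change
    in |T1| between the input image and its stego version."""
    rng = np.random.default_rng(rng)
    bits = rng.integers(0, 2, size=img.shape)
    change = (img & 1) != bits                   # LSB disagrees with bit
    step = rng.choice((-1, 1), size=img.shape)
    stego = img.astype(np.int16)
    stego[change] += step[change]
    stego = np.clip(stego, 0, 255).astype(np.uint8)
    t1_cover = neighbourhood_type_counts(img)[1]
    t1_stego = neighbourhood_type_counts(stego)[1]
    return abs(t1_cover - t1_stego) / max(t1_cover, 1)
```

The paper's discriminator compares this alteration rate against a threshold; a cover image typically shows a larger change in |T1| than an already-embedded stego image does.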
Citations: 63
Automatic Measures for Predicting Performance in Off-Line Signature
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378968
F. Alonso-Fernandez, M. Fairhurst, Julian Fierrez, J. Ortega-Garcia
Performance in terms of accuracy is one of the most important goals of a biometric system. Hence, a measure that can predict performance with respect to a particular sample of interest is especially useful and can be exploited in a number of ways. In this paper, we present two automatic measures for predicting performance in off-line signature verification. Results obtained on a sub-corpus of the MCYT signature database confirm a relationship between the proposed measures and system error rates measured in terms of equal error rate (EER), false acceptance rate (FAR), and false rejection rate (FRR).
Citations: 57
Extension of Mutual Subspace Method for Low Dimensional Feature Projection
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379189
D. Veljkovic, K. Robbins, D. Rubino, N. Hatsopoulos
Face recognition algorithms based on mutual subspace methods (MSM) map segmented faces to single points on a feature manifold and then apply manifold learning techniques to classify the results. This paper proposes a generic extension to MSM for analysis of features in high-throughput recordings. We apply this method to analyze short duration overlapping waves in synthetic data and multielectrode brain recordings. We compare different feature space topologies and projection techniques, including MDS, ISOMAP and Laplacian eigenmaps. Overall we find that ISOMAP shows the least sensitivity to noise and provides the best association between distance in feature space and Euclidean distance in projection space. For non-noisy data, Laplacian eigenmaps show the least sensitivity to feature space topology.
Citations: 3
A Greedy Performance Driven Algorithm for Decision Fusion Learning
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379512
D. Joshi, M. Naphade, A. Natsev
We propose a greedy performance-driven algorithm for learning how to fuse across multiple classification and search systems. We assume a scenario in which many such systems must be fused to generate the final ranking. The algorithm is inspired by ensemble learning but takes the idea further to improve generalization capability. Fusion learning is applied to leverage text-, visual-, and model-based modalities for the 2005 TRECVID query retrieval task. Experiments using the well-established retrieval effectiveness measure of mean average precision reveal that our proposed algorithm improves over a naive baseline (fusion with equal weights) as well as over Caruana's original algorithm (NACHOS) by 36% and 46%, respectively.
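The ensemble-selection idea this builds on can be sketched as greedy forward selection with replacement: repeatedly add the system whose inclusion most improves validation average precision, and read fusion weights off the selection counts. This is a toy single-query sketch, not the paper's algorithm; the helper `average_precision` and the fixed round count are assumptions.

```python
import numpy as np

def average_precision(scores, labels):
    """AP of a ranked list: mean of precision@k taken over the ranks of
    the relevant (label 1) items."""
    order = np.argsort(-scores)
    rel = np.asarray(labels)[order]
    hits = np.cumsum(rel)
    prec = hits / (np.arange(len(rel)) + 1)
    return float(np.sum(prec * rel) / max(rel.sum(), 1))

def greedy_fusion(system_scores, labels, rounds=50):
    """Greedy performance-driven fusion: at each round add, with
    replacement, the system whose inclusion in the running average most
    improves validation AP; selection counts become fusion weights."""
    n_sys = len(system_scores)
    counts = np.zeros(n_sys)
    fused = np.zeros_like(system_scores[0], dtype=float)
    for _ in range(rounds):
        best, best_ap = None, -1.0
        for s in range(n_sys):
            cand = (fused * counts.sum() + system_scores[s]) / (counts.sum() + 1)
            ap = average_precision(cand, labels)
            if ap > best_ap:
                best, best_ap = s, ap
        counts[best] += 1
        fused = (fused * (counts.sum() - 1) + system_scores[best]) / counts.sum()
    return counts / counts.sum()
```

With one perfectly ranking system and one adversarial system, the greedy loop concentrates all weight on the former, which is the behavior that separates performance-driven fusion from equal-weight averaging.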
Citations: 9
Structure Preserving Image Interpolation via Adaptive 2D Autoregressive Modeling
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379987
Xiangjun Zhang, Xiaolin Wu
The performance of image interpolation depends on an image model that can adapt to the nonstationary statistics of natural images when estimating the missing pixels. However, constructing such an adaptive model requires knowledge of the very pixels that are missing. We resolve this dilemma with a new piecewise 2D autoregressive technique that builds the model and estimates the missing pixels jointly. This task is formulated as a non-linear optimization problem. Although computationally demanding, the new non-linear approach produces results superior to current methods in both PSNR and subjective visual quality. Moreover, in quest of a practical solution, we break the non-linear optimization problem into two subproblems of linear least-squares estimation. This linear approach proves very effective in our experiments.
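The linear building block of a piecewise AR scheme can be sketched as follows: fit AR coefficients by least squares over a local window, then predict a pixel from its diagonal neighbours. The paper estimates model and missing pixels jointly via non-linear optimization; the 4-tap diagonal model and window handling here are assumptions for illustration.

```python
import numpy as np

def fit_ar_and_predict(window):
    """Fit a 4-tap diagonal AR model  x(i,j) ~ sum_k a_k * diag_k(i,j)
    by linear least squares over all interior pixels of a local window,
    then predict the window's centre pixel from its diagonal neighbours."""
    h, w = window.shape
    rows, targets = [], []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            rows.append([window[i - 1, j - 1], window[i - 1, j + 1],
                         window[i + 1, j - 1], window[i + 1, j + 1]])
            targets.append(window[i, j])
    a, *_ = np.linalg.lstsq(np.array(rows, float),
                            np.array(targets, float), rcond=None)
    ci, cj = h // 2, w // 2
    neigh = np.array([window[ci - 1, cj - 1], window[ci - 1, cj + 1],
                      window[ci + 1, cj - 1], window[ci + 1, cj + 1]], float)
    return float(neigh @ a)
```

On a locally smooth patch such as a linear ramp, the fitted model reproduces the pixel exactly; near edges the window-local fit adapts the coefficients, which is the point of the piecewise formulation.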
Citations: 5
High Resolution Image Reconstruction in Shape from Focus
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379094
R. R. Sahay, A. Rajagopalan
In the Shape from Focus (SFF) method, a sequence of images of a 3D object is captured for computing its depth profile. However, it is useful in several applications to also derive a high resolution focused image of the 3D object. Given the space-variantly blurred frames and the depth map, we propose a method to optimally estimate a high resolution image of the object within the SFF framework.
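For context, the baseline SFF depth computation that this work extends can be sketched with a sum-modified-Laplacian focus measure and a per-pixel argmax over the focal stack; the focus measure is a common default in the SFF literature, not taken from this paper.

```python
import numpy as np

def modified_laplacian(img):
    """Sum-modified-Laplacian focus measure: |2x - left - right| plus
    |2x - up - down|, a standard per-pixel sharpness map for SFF."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))

def depth_from_focus(stack):
    """Per-pixel depth index: the frame in which the pixel is sharpest."""
    fm = np.stack([modified_laplacian(f) for f in stack])
    return np.argmax(fm, axis=0)
```

The space-variant blur model in the paper then reuses this depth map to pose high resolution image estimation as a restoration problem.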
Citations: 2
New Features to Identify Computer Generated Images
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4380047
A. Dirik, Sevinc Bayram, H. Sencar, N. Memon
Discrimination of computer generated images from real images is becoming more and more important. In this paper, we propose the use of new features to distinguish computer generated images from real images. The proposed features are based on differences in the image acquisition process. More specifically, traces of demosaicking and chromatic aberration are used to differentiate computer generated images from digital camera images. We observe that the former features perform very well on high-quality images, whereas the latter perform consistently across a wide range of compression values. The experimental results show that the proposed features are capable of improving the accuracy of state-of-the-art techniques.
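The demosaicking trace can be illustrated with a toy statistic: in a channel produced by bilinear CFA interpolation, pixels at the interpolated Bayer phase are reproduced almost exactly by their four neighbours, while a computer-generated image shows no such phase asymmetry. The bilinear CFA model and the max/min ratio statistic are assumptions for illustration, not the paper's detector.

```python
import numpy as np

def cfa_residual_ratio(channel):
    """Compare the 4-neighbour bilinear prediction residual at the two
    checkerboard phases of a channel.  Interpolated sites of a bilinearly
    demosaicked image have near-zero residual, so the ratio of the larger
    to the smaller phase-mean residual is large; for images without CFA
    interpolation the ratio stays near 1."""
    c = channel.astype(float)
    # bilinear prediction of every interior pixel from its 4 neighbours
    pred = (c[:-2, 1:-1] + c[2:, 1:-1] + c[1:-1, :-2] + c[1:-1, 2:]) / 4.0
    res = np.abs(c[1:-1, 1:-1] - pred)
    ii, jj = np.meshgrid(np.arange(res.shape[0]), np.arange(res.shape[1]),
                         indexing="ij")
    phase = (ii + jj) % 2        # crop shifts (i, j) by (1, 1): parity kept
    r0 = res[phase == 0].mean() + 1e-12
    r1 = res[phase == 1].mean() + 1e-12
    return max(r0, r1) / min(r0, r1)
```

A ratio far above 1 hints that one pixel lattice was synthesized from the other, which is the kind of acquisition fingerprint the proposed features exploit.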
Citations: 96
Optimal Joint Source-Channel Coding using Unequal Error Protection for the Scalable Extension of H.264/MPEG-4 AVC
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379635
M. Stoufs, A. Munteanu, P. Schelkens, J. Cornelis
This paper proposes an optimized joint source-channel coding methodology with unequal error protection for the transmission of video encoded with the recently developed scalable extension of H.264/MPEG-4 AVC. The proposed methodology uses a simplified Viterbi-based search method which significantly outperforms the classical exhaustive search method in terms of computational complexity, leading to a practically applicable solution at the expense of a minimal loss of optimality. Experimental results show the effectiveness of our protection methodology and illustrate its capability to provide graceful degradation in the presence of channel mismatches.
Citations: 11