
Latest publications from the 2007 IEEE International Conference on Image Processing

Segmentation of Medical Ultrasound Images using Active Contours
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379878
O. Michailovich, A. Tannenbaum
Segmentation of medical ultrasound images (e.g., for surgical or radiotherapy planning) is known to be a difficult task due to the relatively low resolution and reduced contrast of the images, as well as the discontinuity and uncertainty of segmentation boundaries caused by speckle noise. Under such conditions, useful segmentation results seem achievable only by means of relatively complex algorithms, which are usually computationally involved and/or require prior learning. In this paper, a different approach to the problem of segmentation of medical ultrasound images is proposed. In particular, we propose to preprocess the images before they are subjected to a segmentation procedure. The proposed preprocessing modifies the images (without affecting their anatomic content) so that the resulting images can be effectively segmented by relatively simple and computationally efficient means. The performance of the proposed method is tested in a series of both in silico and in vivo experiments.
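The abstract's central idea, that preprocessing can make a simple segmenter sufficient, can be sketched as follows. This is an illustrative stand-in, not the paper's algorithm: a 3×3 median filter suppresses a speckle-like outlier so that plain global thresholding then segments a toy image cleanly. The toy image and threshold are invented for demonstration.

```python
# Illustrative sketch (not the paper's method): despeckle a toy "ultrasound"
# image with a 3x3 median filter, then segment with a global threshold.

def median_filter_3x3(img):
    """Return a median-filtered copy of a 2D list of floats (borders kept)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 samples
    return out

def threshold(img, t):
    """Binary segmentation: 1 where intensity exceeds t, else 0."""
    return [[1 if v > t else 0 for v in row] for row in img]

# Toy frame: a 3x3 bright "lesion" on a dark background plus one speckle spike.
img = [[10.0] * 7 for _ in range(7)]
for y in range(1, 4):
    for x in range(1, 4):
        img[y][x] = 200.0
img[5][5] = 255.0                          # isolated speckle outlier

mask_raw = threshold(img, 100)             # the speckle spike survives
mask_clean = threshold(median_filter_3x3(img), 100)
print(mask_raw[5][5], mask_clean[5][5], mask_clean[2][2])
```

After filtering, the isolated speckle is rejected while the lesion interior still crosses the threshold, which is the effect the paper's (far more sophisticated) preprocessing aims for.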
Citations: 24
A Novel Facial Feature Point Localization Method on 3D Faces
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379248
Peng Guan, Yaoliang Yu, Liming Zhang
Although 2D-based face recognition methods have made great progress in the past decades, some problems remain unsolved, such as sensitivity to pose, illumination, and expression (PIE) variations. Recently, more and more researchers have focused on 3D-based face recognition approaches. Among these techniques, facial feature point localization plays an important role in representing and matching 3D faces. In this paper, we present a novel feature point localization method for 3D faces that combines a global shape model with a local surface model. A Bézier surface is introduced to represent the local structure around different feature points, and the global shape model is used to constrain the local search results. Experimental results comparing our method with curvature analysis show the feasibility and efficiency of the new idea.
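The local surface model named in the abstract is a Bézier patch. A minimal sketch of evaluating one, with an invented 4×4 control grid standing in for the neighborhood of a facial feature point (the grid values and sizes are assumptions, not the paper's):

```python
# Sketch: evaluate a bicubic Bezier surface patch at (u, v) via the
# Bernstein basis. Control points here are invented for illustration.
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def bezier_surface(ctrl, u, v):
    """Evaluate a Bezier patch at (u, v) in [0,1]^2.
    ctrl is an (n+1) x (m+1) grid of (x, y, z) control points."""
    n, m = len(ctrl) - 1, len(ctrl[0]) - 1
    x = y = z = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            w = bernstein(n, i, u) * bernstein(m, j, v)
            px, py, pz = ctrl[i][j]
            x += w * px; y += w * py; z += w * pz
    return (x, y, z)

# A flat 4x4 control grid lifted in the middle approximates a local bump
# (e.g. around a nose tip).
ctrl = [[(i, j, 4.0 if 1 <= i <= 2 and 1 <= j <= 2 else 0.0)
         for j in range(4)] for i in range(4)]
center = bezier_surface(ctrl, 0.5, 0.5)
corner = bezier_surface(ctrl, 0.0, 0.0)
print(center, corner)
```

The patch interpolates the corner control points exactly and smooths the interior, which is what makes it a convenient local descriptor of surface shape.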
Citations: 6
A Knowledge Structuring Technique for Image Classification
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379600
Le Dong, E. Izquierdo
A system for image analysis and classification based on a knowledge structuring technique is presented. The knowledge structuring technique automatically creates a relevance map from salient areas of natural images. It also derives a set of well-structured representations from low-level description to drive the final classification. The backbone of the knowledge structuring technique is a distribution mapping strategy involving two basic modules: structured low-level feature extraction using a convolutional neural network, and a topology representation module based on a growing cell structure network. Classification is achieved by simulating high-level, top-down visual information perception and classifying with an incremental Bayesian parameter estimation method. The proposed modular system architecture offers straightforward expansion to include user relevance feedback, contextual input, and multimodal information if available.
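The "incremental Bayesian parameter estimation" step can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the paper's estimator: a naive Bayes classifier over binary features whose class-conditional counts are updated one labeled sample at a time, so the parameter estimates refine incrementally as data arrives. Class names, features, and smoothing are invented.

```python
# Sketch: incremental parameter estimation for a naive Bayes classifier
# over binary features. Counts update one sample at a time.
from collections import defaultdict
from math import log

class IncrementalNB:
    def __init__(self, n_features, alpha=1.0):
        self.alpha = alpha                        # Laplace smoothing
        self.class_count = defaultdict(int)
        self.feat_count = defaultdict(lambda: [0] * n_features)

    def update(self, x, label):
        """Fold in one labeled sample (x is a 0/1 feature vector)."""
        self.class_count[label] += 1
        for i, v in enumerate(x):
            self.feat_count[label][i] += v

    def predict(self, x):
        """MAP class under the current (incrementally estimated) parameters."""
        best, best_lp = None, float("-inf")
        total = sum(self.class_count.values())
        for c, nc in self.class_count.items():
            lp = log(nc / total)                  # log prior
            for i, v in enumerate(x):
                p1 = (self.feat_count[c][i] + self.alpha) / (nc + 2 * self.alpha)
                lp += log(p1 if v else 1.0 - p1)  # log likelihood
            if lp > best_lp:
                best, best_lp = c, lp
        return best

nb = IncrementalNB(3)
nb.update([1, 1, 0], "building"); nb.update([1, 0, 0], "building")
nb.update([0, 1, 1], "tree");     nb.update([0, 0, 1], "tree")
print(nb.predict([1, 1, 0]))
```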
Citations: 0
Accurate Dynamic Scene Model for Moving Object Detection
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379545
Hong Yang, Yihua Tan, J. Tian, Jian Liu
The adaptive pixel-wise Gaussian mixture model (GMM) is a popular method for modeling dynamic scenes viewed by a fixed camera. However, capturing the accurate mean and variance of a complex pixel is not a trivial problem for a GMM. This paper presents a two-layer Gaussian mixture model (TLGMM) of dynamic scenes for moving object detection. The first layer, the real model, specifically handles gradually changing pixels; the second layer, the on-ready model, focuses on pixels that change significantly and irregularly. TLGMM can represent dynamic scenes more accurately and effectively. Additionally, a long-term and a short-term variance are taken into account to alleviate the transparency problems faced by pixel-based methods.
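The per-pixel machinery underlying such models can be sketched in its simplest form. This is deliberately reduced to a single running Gaussian per pixel (not the paper's two-layer mixture): the mean and variance adapt with a learning rate, and a pixel is flagged as foreground when it deviates by more than k standard deviations. All constants are illustrative assumptions.

```python
# Sketch: one adaptive Gaussian per pixel for background subtraction,
# a heavily simplified relative of the GMM approach in the abstract.

class PixelModel:
    def __init__(self, init, var=25.0, lr=0.05, k=2.5):
        self.mean, self.var, self.lr, self.k = float(init), var, lr, k

    def observe(self, value):
        """Return True if `value` looks like foreground, then adapt."""
        d = value - self.mean
        foreground = d * d > (self.k ** 2) * self.var
        if not foreground:                 # adapt only to background samples
            self.mean += self.lr * d
            self.var += self.lr * (d * d - self.var)
        return foreground

pm = PixelModel(100.0)
history = [pm.observe(v) for v in [101, 99, 102, 100, 180, 98]]
print(history)
```

Only the large jump to 180 is flagged; the small fluctuations are absorbed into the background statistics. The paper's second ("on-ready") layer exists precisely to handle pixels whose changes are too irregular for a single adaptive model like this one.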
Citations: 28
Using a Markov Network to Recognize People in Consumer Images
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4380061
Andrew C. Gallagher, Tsuhan Chen
Markov networks are an effective tool for the difficult but important problem of recognizing people in consumer image collections. Given a small set of labeled faces, we seek to recognize the other faces in an image collection. The constraints of the problem are exploited when forming the Markov network edge potentials. Inference is also used to suggest faces for the user to label, minimizing the work on the part of the user. In one test set containing 4 individuals, an 86% recognition rate is achieved with only 3 labeled examples.
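One constraint the abstract alludes to, two faces in the same photo cannot be the same person, fits naturally into a pairwise Markov network. A toy sketch, with brute-force MAP inference over two faces and two identities (all affinity numbers and names are invented; the paper's potentials and inference are more elaborate):

```python
# Sketch: MAP inference in a tiny pairwise Markov network by enumeration.
# Unary potentials = appearance affinity to each labeled person; the
# pairwise potential zeroes out assignments that repeat an identity
# within one photo.
from itertools import product

people = ["alice", "bob"]                     # hypothetical labeled identities
unary = [{"alice": 0.9, "bob": 0.2},          # face 0's affinities
         {"alice": 0.6, "bob": 0.5}]          # face 1's affinities
same_photo = [(0, 1)]                         # faces 0 and 1 share a photo

def score(assignment):
    s = 1.0
    for i, p in enumerate(assignment):
        s *= unary[i][p]
    for i, j in same_photo:                   # hard exclusion constraint
        if assignment[i] == assignment[j]:
            s *= 0.0
    return s

best = max(product(people, repeat=len(unary)), key=score)
print(best)
```

Without the pairwise term, both faces would greedily pick "alice"; the constraint forces the jointly most plausible labeling instead, which is the benefit of joint inference the abstract describes.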
Citations: 8
Fast Mode Decision for Intra Prediction in H.264/AVC Encoder
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379830
Byeongdu La, Minyoung Eom, Yoonsik Choe
The H.264/AVC video coding standard uses rate-distortion optimization (RDO) to improve compression performance in intra prediction. Although this method selects the best coding mode for the current macroblock, it increases the computational complexity compared with previous standards. In this paper, a fast intra mode decision algorithm for the H.264/AVC encoder based on the dominant edge direction (DED) is proposed. The algorithm uses an approximation of the discrete cosine transform (DCT) coefficient formula. By detecting the DED before intra prediction, 3 modes instead of 9 are chosen for RDO calculation to decide the best mode for each 4×4 luma block. For 16×16 luma and 8×8 chroma blocks, only 2 modes are chosen instead of 4. Experimental results show that the computation time of the proposed algorithm is reduced to about 71% of that of the full-search method in the reference code, with negligible quality loss.
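The pruning idea can be sketched directly: estimate a block's dominant edge direction from intensity differences, then test only a small candidate subset of the nine 4×4 intra modes. Note the direction test and the mode grouping below are illustrative assumptions, not the paper's DCT-based detector or its exact mode mapping.

```python
# Sketch: pick a pruned set of H.264 4x4 intra modes from a block's
# dominant edge direction (simplified gradient test, not the paper's).

def dominant_edge_direction(block):
    """Classify a 4x4 luma block as 'vertical', 'horizontal', or 'flat'
    from summed horizontal/vertical intensity differences."""
    gx = sum(abs(block[y][x + 1] - block[y][x])
             for y in range(4) for x in range(3))
    gy = sum(abs(block[y + 1][x] - block[y][x])
             for y in range(3) for x in range(4))
    if gx == gy == 0:
        return "flat"
    # a strong horizontal intensity change implies a vertical edge
    return "vertical" if gx > gy else "horizontal"

# Illustrative candidate subsets (3 of the 9 modes) per detected direction;
# mode numbers follow H.264: 0=vertical, 1=horizontal, 2=DC,
# 7=vertical-left, 8=horizontal-up.
CANDIDATES = {
    "vertical":   [0, 2, 7],
    "horizontal": [1, 2, 8],
    "flat":       [2, 0, 1],   # DC first for smooth blocks
}

block = [[10, 10, 200, 200]] * 4      # sharp vertical edge
d = dominant_edge_direction(block)
print(d, CANDIDATES[d])
```

RDO is then run over three modes rather than nine, which is where the roughly 3× reduction in mode-search work comes from.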
Citations: 34
Automated Segmentation of Torn Frames using the Graph Cuts Technique
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379015
D. Corrigan, N. Harte, A. Kokaram
Film tear, the physical ripping of the film material, is a form of degradation in archived film. A tear causes displacement of a region of the degraded frame and the loss of image data along the tear boundary. In [1], a restoration algorithm was proposed to correct the displacement introduced by the tear by estimating the global motion of the two regions on either side of it. However, that algorithm depended on a user-defined segmentation to divide the frame. This paper presents a new fully automated segmentation algorithm which divides affected frames along the tear. The algorithm employs the graph cuts optimisation technique and uses temporal intensity differences, rather than spatial gradient, to describe the boundary properties of the segmentation. Segmentations produced with the proposed algorithm agree well with the perceived correct segmentation.
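The graph-cuts machinery behind such a segmenter reduces to an s-t min-cut, which max-flow computes. A tiny sketch with two "pixel" nodes and Edmonds-Karp max-flow; all capacities are invented for illustration (the paper's graph would use temporal-difference boundary terms over a full frame):

```python
# Sketch: binary segmentation as s-t min-cut, solved with Edmonds-Karp
# max-flow on an adjacency-matrix graph. Terminal edges encode how well
# each pixel matches either side of the tear; the pixel-pixel edge
# penalizes cutting through smooth regions.
from collections import deque

def max_flow(cap, s, t):
    """Return (max-flow value, residual matrix) via BFS augmenting paths."""
    n = len(cap)
    res = [row[:] for row in cap]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow, res
        bottleneck, v = float("inf"), t      # find path capacity
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v]); v = parent[v]
        v = t
        while v != s:                        # push flow along the path
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

def min_cut_side(res, s):
    """Nodes still reachable from s in the residual graph = source side."""
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(len(res)):
            if v not in seen and res[u][v] > 0:
                seen.add(v); q.append(v)
    return seen

# Nodes: 0=source, 3=sink; pixels 1 and 2 sit on opposite sides of a tear.
cap = [[0, 9, 1, 0],
       [0, 0, 2, 0],     # weak pixel-pixel edge: the cut goes here
       [0, 2, 0, 9],
       [0, 0, 0, 0]]
flow, res = max_flow(cap, 0, 3)
print(flow, min_cut_side(res, 0))
```

The min cut severs the cheap edges (0→2 and 1→2, total 3), assigning pixel 1 to the source side and pixel 2 to the sink side, i.e. the two sides of the tear.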
Citations: 4
Characterizing packet-loss impairments in compressed video
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4379769
A. Reibman, D. Poole
We examine metrics to predict the visibility of packet losses in MPEG-2 and H.264 compressed video. We use subjective data that has a wide range of parameters, including different error concealment strategies and different compression standards. We evaluate SSIM, MSE, and a slice-boundary mismatch (SBM) metric for their effectiveness at characterizing packet-loss impairments.
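Two of the metrics the paper evaluates, MSE and SSIM, are standard full-reference measures. A minimal sketch over toy 1D "frames" (real SSIM is computed over local windows and averaged; this global variant uses the usual default constants for 8-bit data):

```python
# Sketch: MSE and a global SSIM between a reference signal and a
# degraded one, as simple stand-ins for the full windowed metrics.

def mse(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2   # stabilizing constants
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = [100, 120, 140, 160]
degraded = [100, 120, 140, 90]   # a localized error, e.g. a concealed loss
print(mse(ref, degraded), ssim_global(ref, degraded))
```

Because packet-loss artifacts are spatially localized, metrics computed this way can rank impairments quite differently, which is exactly the comparison the paper makes, including against its slice-boundary mismatch (SBM) metric.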
Citations: 52
Finding Regions of Interest in Home Videos Based on Camera Motion
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4380075
Golnaz Abdollahian, E. Delp
In this paper, we propose an algorithm for identifying regions of interest (ROIs) in video, particularly in keyframes extracted from home video. Camera motion is introduced as a new factor that can influence visual saliency. The global motion parameters are used to generate location-based importance maps. These maps can be combined with other saliency maps calculated from visual and high-level features. Here, we employ contrast-based saliency as an important low-level factor, along with face detection as a high-level feature.
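The map-combination step can be sketched as a normalized weighted sum of a motion-derived importance map and a contrast-saliency map. The toy maps, the weight, and the combination rule are illustrative assumptions (the paper derives its importance maps from estimated global motion parameters):

```python
# Sketch: fuse a location-based importance map (e.g. higher weight in
# the direction the camera pans) with a contrast-saliency map.

def normalize(m):
    """Rescale a 2D map to [0, 1]."""
    lo = min(min(r) for r in m)
    hi = max(max(r) for r in m)
    rng = (hi - lo) or 1.0
    return [[(v - lo) / rng for v in r] for r in m]

def combine(motion_map, contrast_map, w_motion=0.4):
    a, b = normalize(motion_map), normalize(contrast_map)
    return [[w_motion * av + (1 - w_motion) * bv
             for av, bv in zip(ar, br)] for ar, br in zip(a, b)]

motion = [[0, 1, 2]] * 3                       # camera pans to the right
contrast = [[5, 0, 0], [0, 0, 9], [0, 0, 0]]   # two high-contrast spots
saliency = combine(motion, contrast)
roi = max((v, y, x) for y, r in enumerate(saliency) for x, v in enumerate(r))
print(roi)
```

Of the two high-contrast spots, the one lying in the pan direction receives the higher combined score, illustrating how camera motion biases ROI selection.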
Citations: 13
Tamper Detection Based on Regularity of Wavelet Transform Coefficients
Pub Date : 2007-11-12 DOI: 10.1109/ICIP.2007.4378975
Y. Sutcu, Baris Coskun, H. Sencar, N. Memon
Powerful digital media editing tools make producing good-quality forgeries easy for almost anyone. Therefore, proving the authenticity and integrity of digital media becomes increasingly important. In this work, we propose a simple method to detect image tampering operations that involve sharpness/blurriness adjustment. Our approach is based on the assumption that if a digital image undergoes a copy-paste type of forgery, the average sharpness/blurriness value of the forged region is expected to differ from that of the non-tampered parts of the image. The sharpness/blurriness value of an image is estimated from the regularity properties of wavelet transform coefficients, which involves measuring the decay of wavelet transform coefficients across scales. Our preliminary results show that the estimated sharpness/blurriness scores can be used to identify tampered areas of the image.
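The cross-scale decay the abstract relies on can be observed with the simplest wavelet, the 1D Haar transform. In this sketch a sharp step edge keeps large detail energy at the finest scale while a blurred edge does not; the toy signals and the energy-per-scale measure are illustrative stand-ins for the paper's regularity estimate.

```python
# Sketch: 1D Haar analysis across scales; compare fine-scale detail
# energy of a sharp edge vs. a blurred one.

def haar_step(signal):
    """One Haar analysis step: (approximation, detail) at half length."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def detail_energy_per_scale(signal, levels=3):
    """Detail-coefficient energy at each scale, finest first."""
    energies = []
    cur = list(signal)
    for _ in range(levels):
        cur, det = haar_step(cur)
        energies.append(sum(d * d for d in det))
    return energies

sharp = [0, 0, 0, 255, 255, 255, 255, 255]       # step edge
blurred = [0, 36, 73, 109, 146, 182, 219, 255]   # smoothed edge
print(detail_energy_per_scale(sharp))
print(detail_energy_per_scale(blurred))
```

The sharp signal concentrates far more energy at the finest scale than the blurred one, so a region whose fine-scale coefficients decay unusually fast (or slowly) relative to its surroundings is a candidate spliced region.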
Citations: 70