
2014 IEEE International Conference on Image Processing (ICIP): Latest Publications

Correlation noise modeling for multiview transform domain Wyner-Ziv video coding
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025648
Catarina Brites, F. Pereira
Multiview Wyner-Ziv (MV-WZ) video coding rate-distortion (RD) performance is highly influenced by the adopted correlation noise model (CNM). In the related literature, the statistics of the correlation noise between the original frame and the side information (SI), typically resulting from the fusion of temporally and inter-view created SIs, are modelled by a Laplacian distribution. In most cases, the Laplacian CNM parameter is estimated using an offline approach, assuming that either the SI is available at the encoder or the originals are available at the decoder, which is not realistic. In this context, this paper proposes the first practical, online CNM solution for a multiview transform domain WZ (MV-TDWZ) video codec. The online estimation of the Laplacian CNM parameter is performed at the decoder, based on metrics exploiting both the temporal and inter-view correlations at two levels of granularity, namely transform band and transform coefficient. The results obtained show that the finest granularity level achieves the best RD performance, since the inter-view, temporal and spatial correlations are exploited with the highest adaptation.
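As a rough illustration of a decoder-side, online Laplacian fit, the following Python sketch estimates a per-band scale parameter from the disagreement between the two SI hypotheses, a quantity the decoder can observe without the original frame. The function names and the choice of proxy residual are illustrative assumptions, not the paper's exact metrics; for a zero-mean Laplacian, variance = 2/alpha², so alpha = sqrt(2/variance).

```python
import numpy as np

def laplacian_alpha(residual_band):
    """Fit the Laplacian scale parameter alpha for one transform band.

    For a zero-mean Laplacian, variance = 2 / alpha**2, hence
    alpha = sqrt(2 / variance).
    """
    var = np.mean(residual_band ** 2)
    return np.sqrt(2.0 / max(var, 1e-12))

def band_level_cnm(si_temporal_bands, si_interview_bands):
    """Proxy residual: disagreement between the temporal and inter-view SI
    hypotheses, observable at the decoder (no access to the original)."""
    return {b: laplacian_alpha(si_temporal_bands[b] - si_interview_bands[b])
            for b in si_temporal_bands}
```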
Citations: 1
Radial distortion correction from a single image of a planar calibration pattern using convex optimization
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025699
Xianghua Ying, Xiang Mei, Sen Yang, G. Wang, H. Zha
In Hartley and Kang's paper [7], a planar calibration pattern is directly treated as an image and paired with a radially distorted image of the same pattern; from this image pair, a very efficient method determines the center of radial distortion by estimating the epipole in the radially distorted image. After determining the center of radial distortion, a least-squares method is used to recover the radial distortion function under monotonicity constraints. In this paper, we present a convex optimization method that recovers the radial distortion function using the same constraints as Hartley and Kang's method, while obtaining better radial distortion correction results. The experiments validate our approach.
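The flavor of the convex formulation can be shown with a toy monotonicity-constrained polynomial fit. The sketch below uses cvxpy with synthetic radius correspondences; the polynomial model, degree, and data are assumptions for illustration and may differ from the paper's actual parameterization of the distortion function.

```python
import numpy as np
import cvxpy as cp

# Synthetic distorted/undistorted radius correspondences (illustrative only).
r_d = np.linspace(0.05, 1.0, 50)
r_u = r_d * (1 + 0.2 * r_d**2 + 0.05 * r_d**4)

K = 4                                          # polynomial degree of the model
A = np.vander(r_d, K + 1, increasing=True)     # columns: 1, r, r^2, ..., r^K
dA = A[:, :K] * np.arange(1, K + 1)            # columns: 1, 2r, ..., K r^(K-1)

c = cp.Variable(K + 1)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ c - r_u)),
                     [dA @ c[1:] >= 0])        # monotonically non-decreasing fit
problem.solve()
print("fitted coefficients:", c.value)
```

The monotonicity constraint is linear in the coefficients, so the whole fit stays a convex (quadratic) program, which is what makes a globally optimal solution tractable.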
Citations: 17
Automatic defocus spectral matting
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025879
Hui Zhou, T. Ahonen
Alpha matting for a single image is an inherently under-constrained problem and thus normally requires user input. In this paper, an automatic, bottom-up matting algorithm using the defocus cue is proposed. Unlike most defocus matting algorithms, we first extract matting components by applying an unsupervised spectral matting algorithm to the single image. The defocus cue is then used to classify the matting components and form a complete foreground matte. This approach gives more robust results because focus estimation is used at the component level rather than the pixel level.
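A toy version of the component-classification idea: score each spectral-matting component by a sharpness measure and keep the in-focus ones. The Laplacian-response focus measure and the thresholds below are illustrative assumptions, and the component extraction itself (unsupervised spectral matting) is omitted.

```python
import numpy as np
import cv2

def focus_score(gray, alpha, alpha_thresh=0.5):
    """Mean absolute Laplacian response inside one matting component;
    a simple proxy for how in-focus that component is."""
    lap = np.abs(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F))
    mask = alpha > alpha_thresh
    return lap[mask].mean() if mask.any() else 0.0

def foreground_matte(gray, components, score_thresh):
    """Sum the components classified as in-focus into a foreground matte."""
    kept = [a for a in components if focus_score(gray, a) > score_thresh]
    if not kept:
        return np.zeros(gray.shape, dtype=np.float64)
    return np.clip(np.sum(kept, axis=0), 0.0, 1.0)
```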
Citations: 2
A track-before-detect algorithm using joint probabilistic data association filter and interacting multiple models
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026002
Andrea Mazzù, Simone Chiappino, L. Marcenaro, C. Regazzoni
Detection of dim moving point targets against cluttered backgrounds can have a great impact on tracking performance. This becomes a crucial problem especially in low-SNR environments, where target characteristics are highly susceptible to corruption. In this paper, an extended target model, namely the Interacting Multiple Model (IMM), applied to a Track-Before-Detect (TBD) based detection algorithm for distant objects in infrared (IR) sequences, is presented. The approach automatically adapts the kinematic parameter estimates, such as position and velocity, in accordance with the predictions as the dimensions of the target change. A sub-par sensor can cause tracking problems; in particular, for a single object, noisy observations (i.e., fragmented measurements) could be associated with different tracks. To avoid this problem, the presented framework introduces a cooperative mechanism between the Joint Probabilistic Data Association Filter (JPDAF) and the IMM. Experimental results on real and simulated sequences demonstrate the effectiveness of the proposed approach.
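For readers unfamiliar with IMM, the mixing step that lets the filter adapt between motion models looks roughly like the numpy sketch below. It covers only the standard IMM mixing; the JPDAF association and the TBD stage are omitted, and the array shapes are illustrative assumptions.

```python
import numpy as np

def imm_mix(mu, P_trans, states, covs):
    """One IMM mixing step: blend per-model estimates before filtering.

    mu:      (M,) model probabilities at time k-1
    P_trans: (M, M) Markov transition matrix, P_trans[i, j] = p(i -> j)
    states:  (M, n) per-model state estimates
    covs:    (M, n, n) per-model covariances
    """
    c = P_trans.T @ mu                             # predicted model probabilities
    mix = (P_trans * mu[:, None]) / c[None, :]     # mixing weights mu_{i|j}
    mixed_states = mix.T @ states                  # (M, n) mixed initial states
    mixed_covs = np.empty_like(covs)
    for j in range(len(mu)):
        d = states - mixed_states[j]               # spread-of-means term
        mixed_covs[j] = np.einsum('i,ink->nk', mix[:, j],
                                  covs + d[:, :, None] * d[:, None, :])
    return c, mixed_states, mixed_covs
```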
Citations: 2
Latent fingerprint persistence: A new temporal feature space for forensic trace evidence analysis
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026003
R. Merkel, J. Dittmann, M. Hildebrandt
In forensic applications, traces are often hard to detect and segment from challenging substrates at crime scenes. In this paper, we propose using the temporal domain of forensic signals as a novel feature space that provides additional information about a trace. In particular, we introduce a degree-of-persistence measure and a protocol for its computation, allowing flexible extraction of time-domain information based on different features and approximation techniques. Using the example of latent fingerprints on semi-porous and porous surfaces captured with a CWL sensor, we show the potential of this approach to improve performance on the challenge of separating prints from background. Based on 36 previously introduced spectral texture features, we achieve increased separation performance (0.01 ≤ Δκ ≤ 0.13, corresponding to 0.6% to 6.7%) when using the time-domain signal instead of spatial segmentation. The test set consists of 60 different prints on photographic, catalogue, and copy paper, each acquired as a sequence of ten captures. We observe a dependency on the surface used as well as on the number of consecutive images, identify the accuracy and reproducibility of the capturing device as the main limitation, and propose additional steps toward even higher performance in future work.
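One simple way to turn such a capture sequence into a persistence number is to fit a decay model to each feature's time series and read off the fitted rate. The scipy sketch below is only one plausible instance of that idea; the paper's measure and computation protocol are more general, and the model and names here are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b, c):
    """Exponential decay model for a texture feature over capture time."""
    return a * np.exp(-b * t) + c

def degree_of_persistence(feature_series, capture_times):
    """Fit the decay model over the capture sequence; a small fitted rate b
    indicates a persistent (fingerprint-like) signal, a large b a fast-fading
    background response."""
    (a, b, c), _ = curve_fit(decay, capture_times, feature_series,
                             p0=(feature_series[0], 0.1, 0.0), maxfev=5000)
    return b
```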
Citations: 7
Statistics of wavelet coefficients for sparse self-similar images
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026230
J. Fageot, E. Bostan, M. Unser
We study the statistics of wavelet coefficients of non-Gaussian images, focusing mainly on the behaviour at coarse scales. We assume that an image can be whitened by a fractional Laplacian operator, which is consistent with an ∥ω∥^(-γ) spectral decay. In other words, we model images as sparse and self-similar stochastic processes within the framework of generalised innovation models. We show that the wavelet coefficients at coarse scales are asymptotically Gaussian even if the prior model for fine scales is sparse. We further refine our analysis by deriving the theoretical evolution of the cumulants of wavelet coefficients across scales. In particular, the evolution of the kurtosis supplies a theoretical prediction for the Gaussianity level at each scale. Finally, we provide simulations and experiments that support our theoretical predictions.
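The scale-wise Gaussianity prediction is easy to probe empirically: compute the excess kurtosis of the detail coefficients at each level of a 2D wavelet decomposition and check that it decays toward 0 (the Gaussian value) at coarse scales. A minimal sketch with PyWavelets and scipy follows; the wavelet choice and level count are arbitrary here.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def wavelet_kurtosis_per_scale(image, wavelet='haar', levels=4):
    """Excess kurtosis of all detail coefficients at each decomposition
    level; key 1 is the coarsest level, key `levels` the finest."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = {}
    for depth, details in enumerate(coeffs[1:], start=1):  # coarse -> fine
        flat = np.concatenate([d.ravel() for d in details])
        out[depth] = kurtosis(flat)   # excess kurtosis, 0 for a Gaussian
    return out
```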
Citations: 3
Image demosaicing by using iterative residual interpolation
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025373
W. Ye, K. Ma
A new demosaicing approach was introduced recently, based on interpolating the generated residual fields rather than the color-component difference fields commonly used in most demosaicing methods. In view of the attractive performance delivered by this residual interpolation (RI) strategy, a new RI-based demosaicing method with much improved performance is proposed in this paper. The key to our approach is that the RI process is deployed iteratively on all three channels to generate a more accurately reconstructed G channel, from which the R and B channels can be better reconstructed as well. Extensive simulations conducted on two commonly used test datasets clearly demonstrate that our algorithm is superior to existing state-of-the-art demosaicing methods, both in objective performance evaluation and in subjective perceptual quality.
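The core RI step can be sketched as: build a tentative estimate of a sparsely sampled channel from a guide channel, compute residuals where true samples exist, interpolate that (smooth) residual field, and add it back. The fragment below substitutes normalized box filtering for the guided filter and uses a crude gain match; it illustrates the residual-field idea only, not the paper's exact iterative pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def residual_interpolation(sparse_channel, mask, guide, window=5):
    """One simplified residual-interpolation step.

    sparse_channel: channel values, valid only where mask is True
    guide:          densely known channel used for the tentative estimate
    """
    # Crude gain-matched tentative estimate from the guide channel.
    gain = np.sum(sparse_channel * mask) / max(np.sum(guide * mask), 1e-12)
    tentative = guide * gain
    # Residuals are defined only at the true sample positions.
    residual = np.where(mask, sparse_channel - tentative, 0.0)
    # Normalized convolution: local residual average over sampled pixels.
    num = uniform_filter(residual, size=window)
    den = uniform_filter(mask.astype(float), size=window)
    return tentative + num / np.maximum(den, 1e-12)
```

Because the residual field is much smoother than the raw channel, simple interpolation of residuals loses less detail than interpolating the channel directly, which is the intuition the paper builds on.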
Citations: 12
Trajectory clustering for motion pattern extraction in aerial videos
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025203
T. Nawaz, A. Cavallaro, B. Rinner
We present an end-to-end approach for trajectory clustering in aerial videos that enables the extraction of motion patterns in urban scenes. Camera motion is first compensated by mapping object trajectories onto a reference plane. Clustering is then performed based on statistics of the Discrete Wavelet Transform coefficients extracted from the trajectories. Finally, motion patterns are identified by distance minimization from the centroids of the trajectory clusters. Experimental validation on four datasets shows the effectiveness of the proposed approach in extracting trajectory clusters. We also make available two new real-world aerial video datasets together with the estimated object trajectories and ground-truth cluster labeling.
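The clustering stage can be approximated in a few lines: per-trajectory statistics of 1D DWT coefficients, fed to k-means. The sketch assumes trajectories are already mapped onto the reference plane and resampled to a common length; the feature choice and cluster count are illustrative, not the paper's exact configuration.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def dwt_features(trajectory, wavelet='db2', level=2):
    """Feature vector: mean and std of the DWT coefficients of the x and y
    coordinate series at each decomposition level. Assumes all trajectories
    share a common length so feature vectors are comparable."""
    feats = []
    for series in (trajectory[:, 0], trajectory[:, 1]):
        for band in pywt.wavedec(series, wavelet, level=level):
            feats += [band.mean(), band.std()]
    return np.array(feats)

def cluster_trajectories(trajectories, n_clusters=4):
    """trajectories: list of (T, 2) arrays on the reference plane
    (camera motion already compensated)."""
    X = np.vstack([dwt_features(t) for t in trajectories])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```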
Citations: 23
A conditional random field approach for face identification in broadcast news using overlaid text
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025063
G. Paul, Khoury Elie, Meignier Sylvain, Odobez Jean-Marc, D. Paul
We investigate the problem of face identification in broadcast programs where people's names are obtained from text overlays automatically processed with Optical Character Recognition (OCR) and then linked to faces throughout the video. To solve the face-name association and propagation problem, we propose a novel approach that combines the positive effects of two Conditional Random Field (CRF) models: first, a CRF for person diarization (joint temporal segmentation and association of voices and faces) that benefits from the combination of multiple cues, including, as main contributions, the use of identification sources (OCR appearances) and a recurrent local face visual background (LFB) playing the role of a namedness feature; second, a CRF for the joint identification of the person clusters that improves identification performance thanks to the use of further diarization statistics. Experiments conducted on a recent and substantial public dataset of 7 different shows demonstrate the interest and complementarity of the different modeling steps and information sources, leading to state-of-the-art results.
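The intuition behind the OCR-based cue can be shown with a greedy co-occurrence scorer: a face track gets the overlay name it most overlaps with in time. The paper solves the association jointly with CRFs rather than greedily; the sketch below is only the simplest baseline of that idea, with hypothetical data structures.

```python
def overlap(a, b):
    """Temporal overlap in seconds between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def associate_names(face_tracks, ocr_names):
    """Greedy face-name association from temporal co-occurrence.

    face_tracks: {track_id: [(start, end), ...]} face appearance segments
    ocr_names:   {name: [(start, end), ...]} OCR'd overlay appearances
    Returns the best-scoring name per track, or None if nothing co-occurs.
    """
    labels = {}
    for tid, segs in face_tracks.items():
        scores = {name: sum(overlap(s, o) for s in segs for o in spans)
                  for name, spans in ocr_names.items()}
        best = max(scores, key=scores.get) if scores else None
        labels[tid] = best if best is not None and scores[best] > 0 else None
    return labels
```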
Citations: 6
2D+t autoregressive framework for video texture completion
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025944
Fabien Racapé, D. Doshkov, Martin Köppel, P. Ndjiki-Nya
In this paper, an improved 2D+t texture completion framework is proposed, providing high visual quality for the completed dynamic textures. A spatiotemporal autoregressive (STAR) model is used to propagate the signal of several available frames onto frames containing missing textures. Classically, Gaussian white noise drives the model to enable texture innovation. To improve on this, an innovation process is proposed that uses texture information from available training frames. The proposed method is deterministic, which solves a key problem for applications such as synthesis-based video coding. Compression simulations show potential bitrate savings of up to 49% on texture sequences at comparable visual quality. Video results are provided online to allow assessing the visual quality of the completed textures.
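A stripped-down STAR fit illustrates the propagation mechanism: regress each pixel on a spatiotemporal neighbourhood in the previous frame, then predict missing frames deterministically. A full STAR model also conditions on causal neighbours within the current frame and includes the innovation term; this sketch keeps only the temporal part, and all names are illustrative.

```python
import numpy as np
from scipy.ndimage import correlate

def fit_star(frames, radius=1):
    """Least-squares fit of a simplified causal 2D+t AR model: each pixel
    is a linear combination of a (2r+1)x(2r+1) window in the previous
    frame. frames: (T, H, W) array of training frames."""
    T, H, W = frames.shape
    k = 2 * radius + 1
    rows, targets = [], []
    for t in range(1, T):
        for i in range(radius, H - radius):
            for j in range(radius, W - radius):
                patch = frames[t - 1, i - radius:i + radius + 1,
                               j - radius:j + radius + 1]
                rows.append(patch.ravel())
                targets.append(frames[t, i, j])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coeffs.reshape(k, k)

def predict_next(frame, coeffs):
    """Deterministic one-step prediction (innovation term omitted)."""
    return correlate(frame, coeffs, mode='nearest')
```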
Citations: 4