
2008 15th IEEE International Conference on Image Processing: Latest Publications

Decoder side motion vector derivation for inter frame video coding
Pub Date: 2009-05-06 DOI: 10.1109/PCS.2009.5167453
S. Kamp, B. Bross, M. Wien
In this paper, a decoder side motion vector derivation scheme for inter frame video coding is proposed. Using a template matching algorithm, motion information is derived at the decoder instead of explicitly coding the information into the bitstream. Based on Lagrangian rate-distortion optimisation, the encoder locally signals whether motion derivation or forward motion coding is used. While our method exploits multiple reference pictures for improved prediction performance and bitrate reduction, only a small template matching search range is required. Derived motion information is reused to improve the performance of predictive motion vector coding in subsequent blocks. An efficient conditional signalling scheme for motion derivation in Skip blocks is employed. The motion vector derivation method has been implemented as an extension to H.264/AVC. Simulation results show that a bitrate reduction of up to 10.4% over H.264/AVC is achieved by the proposed scheme.
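The core of the scheme can be pictured with a small template-matching search. The sketch below is a simplified, hypothetical rendition rather than the authors' implementation: it assumes grayscale numpy frames, a rectangular template band above and to the left of the current block, SSD matching, and that the block position leaves enough margin for the template and search window.

```python
import numpy as np

def derive_motion_vector(ref, recon, x, y, block=16, tpl=4, search=4):
    """Search a small window in the reference frame `ref` for the displacement
    whose template region best matches the already-reconstructed template
    around the block at (x, y) in `recon` (the current, partially decoded frame)."""
    # Template: the tpl-pixel-wide band above and to the left of the block.
    cur_tpl = recon[y - tpl:y + block, x - tpl:x + block].astype(np.float64).copy()
    cur_tpl[tpl:, tpl:] = 0.0          # mask out the (not yet decoded) block itself
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[y - tpl + dy:y + block + dy,
                       x - tpl + dx:x + block + dx].astype(np.float64).copy()
            cand[tpl:, tpl:] = 0.0     # compare only the template band
            cost = np.sum((cur_tpl - cand) ** 2)
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv                     # (dx, dy) derived at the decoder
```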
Citations: 48
Analysis of human attractiveness using manifold kernel regression
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711703
B. Davis, S. Lazebnik
This paper uses a recently introduced manifold kernel regression technique to explore the relationship between facial shape and attractiveness on a heterogeneous dataset of over three thousand images gathered from the Web. Using the concept of the Frechet mean of images under a diffeomorphic transformation model, we evolve the average face as a function of attractiveness ratings. Examining these averages and associated deformation maps enables us to discern aggregate shape change trends for male and female faces.
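As a rough illustration of the regression step, the sketch below performs Nadaraya-Watson kernel regression of images against attractiveness ratings using a plain Euclidean weighted mean. This is only a stand-in for the paper's Frechet mean under a diffeomorphic transformation model; the function names, bandwidth and weighting are illustrative assumptions.

```python
import numpy as np

def kernel_regressed_face(images, ratings, target_rating, bandwidth=0.5):
    """Nadaraya-Watson estimate of the 'average face' at a given rating.
    images: (N, H, W) aligned face images; ratings: (N,) attractiveness scores."""
    images = np.asarray(images, dtype=np.float64)
    ratings = np.asarray(ratings, dtype=np.float64)
    # Gaussian kernel weights centred on the target rating.
    w = np.exp(-0.5 * ((ratings - target_rating) / bandwidth) ** 2)
    w /= w.sum()
    # Weighted mean image (Euclidean stand-in for the Frechet mean).
    return np.tensordot(w, images, axes=1)
```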
Citations: 23
Linear ego-motion recovery algorithm based on quasi-parallax
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711734
Chuanxin Hu, L. Cheong
A parallel camera array resembles a large class of biological visual systems. It consists of two cameras moving in tandem, which have parallel viewing directions and no overlap in the visual fields. Without correspondences, we leverage pairs of parallel visual rays to remove rotational flows and obtain a quasi-parallax motion field, which leads to an accurate and parsimonious solution for translation recovery. The rotation is subsequently recovered using the epipolar constraints and benefits greatly from the good translation estimate. Experimental results show that the linear and the bundle adjustment methods achieve comparable performance.
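Once rotational flow has been removed, translation recovery reduces to locating the focus of expansion, which can be posed as a linear least-squares problem. The sketch below illustrates only that final step, assuming a rotation-free (quasi-parallax) flow field is already available; it is not the paper's full derivation, and the names are placeholders.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """points: (N, 2) pixel coordinates; flows: (N, 2) purely translational flow.
    Each flow vector must be parallel to (p - foe), which gives one linear
    equation per point: v*x_f - u*y_f = v*x - u*y."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares FOE estimate
    return foe
```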
Citations: 1
Motion detection with an unstable camera
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711733
Pierre-Marc Jodoin, J. Konrad, Venkatesh Saligrama, Vincent Veilleux-Gaboury
Fast and accurate motion detection in the presence of camera jitter is known to be a difficult problem. Existing statistical methods often produce abundant false positives since jitter-induced motion is difficult to differentiate from scene-induced motion. Although frame alignment by means of camera motion compensation can help resolve such ambiguities, the additional steps of motion estimation and compensation increase the complexity of the overall algorithm. In this paper, we address camera jitter by applying background subtraction to scene dynamics instead of scene photometry. In our method, an object is assumed moving if its dynamical behavior is different from the average dynamics observed in a reference sequence. Our method is conceptually simple, fast, requires little memory, and is easy to train, even on videos containing moving objects. It has been tested and performs well on indoor and outdoor sequences with strong camera jitter.
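The idea of subtracting scene dynamics rather than photometry can be sketched as follows: learn each pixel's typical temporal activity from a reference sequence, then flag pixels whose current activity deviates by more than a few standard deviations. The statistics and threshold here are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def train_dynamics(reference_frames):
    """reference_frames: (T, H, W) grayscale sequence.
    Returns per-pixel mean and std of absolute frame differences
    (a simple description of the scene's 'normal' dynamics)."""
    diffs = np.abs(np.diff(reference_frames.astype(np.float64), axis=0))
    return diffs.mean(axis=0), diffs.std(axis=0) + 1e-6

def detect_motion(prev_frame, cur_frame, mean_act, std_act, k=3.0):
    """Flag pixels whose current temporal activity exceeds the learned
    dynamics by more than k standard deviations."""
    activity = np.abs(cur_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return (activity - mean_act) > k * std_act    # boolean motion mask
```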
Citations: 23
Multi bearer channel resource allocation for optimised transmission of video objects
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4712451
S. Nasir, S. Worrall, M. Mrak, A. Kondoz
This paper presents a novel channel optimisation scheme that enhances the quality of object-based video transmitted over a fixed-bandwidth channel. The optimisation methodology is based on an accurate modelling of video packet distortion at the encoder. Video packets are ranked according to their expected distortion and are then mapped to one of a number of different priority radio bearers. In the proposed scheme, the video compression technique uses motion-compensated prediction, and video frames are split into a number of video packets. The algorithm performance is demonstrated for object-based MPEG-4 video transmission over a UMTS/FDD system. The results demonstrate that the performance gain achieved with the proposed scheme can reach 2 dB, compared with the equal error protection scheme for video transmission over a fixed bandwidth channel.
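The ranking-and-mapping step can be pictured with a short sketch: packets sorted by expected distortion are assigned, in order, to bearers of decreasing priority. The bearer capacities and the (packet_id, distortion) representation below are hypothetical simplifications, not the paper's exact allocation rule.

```python
def allocate_packets_to_bearers(packets, bearer_capacities):
    """packets: list of (packet_id, expected_distortion) pairs.
    bearer_capacities: packets per bearer, highest-priority bearer first.
    Returns a list of (packet_id, bearer_index) assignments."""
    # Most important packets (largest expected distortion if lost) first.
    ranked = sorted(packets, key=lambda p: p[1], reverse=True)
    allocation, idx = [], 0
    for bearer, capacity in enumerate(bearer_capacities):
        for _ in range(capacity):
            if idx >= len(ranked):
                return allocation
            allocation.append((ranked[idx][0], bearer))
            idx += 1
    return allocation
```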
Citations: 1
Texture modulation-constrained image decomposition
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711874
Georgios Evangelopoulos, P. Maragos
Texture modeling and separation of structure in images are treated in synergy. A variational image decomposition scheme is formulated using explicit texture reconstruction constraints from the outputs of linear filters tuned to different spatial frequencies and orientations. Information relevant to the texture part of the image is reconstructed using modulation modeling and component selection. The general formulation leads to a u + Kv model of K + 1 image components, with multiple texture subcomponents.
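A minimal sketch of the filter-bank stage is given below: responses of a few Gabor filters tuned to different spatial frequencies and orientations, of the kind used as texture-reconstruction constraints. The filter parameters are illustrative, and the variational u + Kv minimisation itself is not shown.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=4.0, size=21):
    """Even (cosine-phase) Gabor kernel at spatial frequency `freq`
    (cycles/pixel) and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * x_rot)

def texture_responses(image, freqs=(0.1, 0.2), thetas=(0.0, np.pi / 4, np.pi / 2)):
    """Return one filter-response map per (frequency, orientation) pair."""
    return [convolve2d(image, gabor_kernel(f, t), mode='same', boundary='symm')
            for f in freqs for t in thetas]
```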
Citations: 1
Capturing light field textures for video coding
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4712077
W. Mantzel, J. Romberg
There is a significant amount of redundancy between video frames or images that can be explained by considering these observations as samples of a light field function. By using a compact depth-augmented representation for such a light field function, it may even be possible to tie together inter-frame dependencies in a more meaningful way than conventional 2-D intensity based motion compensation methods. We propose a depth-augmented layered orthographic light field representation and show how it may be constructed from actual data at a basic level as the solution to an over-determined linear inverse problem. We finally demonstrate the potential utility of such information in video coding with a compression example in which this light field side information is given as a simple texture map.
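At its most basic, the reconstruction is an over-determined linear inverse problem; the sketch below simply solves such a system in the least-squares sense, assuming the linear operator mapping layer textures to observed pixels has already been assembled. That operator and the names used are placeholders, not the paper's construction.

```python
import numpy as np

def solve_layered_textures(A, observations):
    """A: (M, N) operator stacking how the N layer-texture unknowns
    project into all M observed pixel values; observations: (M,) stacked
    pixel values from the input frames. Returns the least-squares textures."""
    textures, residuals, rank, _ = np.linalg.lstsq(A, observations, rcond=None)
    return textures
```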
Citations: 0
Multi-graph similarity reinforcement for image annotation refinement
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711924
Jimin Jia, Nenghai Yu, Xiaoguang Rui, Mingjing Li
In image annotation refinement, word correlations among candidate annotations are used to retain highly relevant words and remove irrelevant words. Existing methods build word correlations on the textual annotations of images. In this paper, the visual content of images is used to derive better word correlations through a multi-graph similarity reinforcement method. Firstly, an image visual-similarity graph and a word-correlation graph are built. Secondly, the two graphs are iteratively reinforced by each other through an image-word transfer matrix. Once the two graphs converge to steady states, the new word-correlation graph is used to refine the candidate annotations. The experiments show that our method performs better than a method that does not consider the visual content of images.
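A hypothetical rendition of the reinforcement loop is sketched below: an image-similarity matrix and a word-correlation matrix are alternately updated through an image-word transfer matrix until they stabilise. The update rule, damping factor and normalisation are assumptions for illustration, not the paper's equations.

```python
import numpy as np

def reinforce(S_img, S_word, T, alpha=0.5, iters=20):
    """S_img: (I, I) image similarities; S_word: (W, W) word correlations;
    T: (I, W) image-word association matrix from the candidate annotations.
    Each graph is reinforced by the other, projected through T."""
    for _ in range(iters):
        S_word_new = alpha * S_word + (1 - alpha) * (T.T @ S_img @ T)
        S_img_new = alpha * S_img + (1 - alpha) * (T @ S_word @ T.T)
        # Normalise to keep the similarities bounded across iterations.
        S_word = S_word_new / (np.abs(S_word_new).max() + 1e-12)
        S_img = S_img_new / (np.abs(S_img_new).max() + 1e-12)
    return S_img, S_word
```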
Citations: 13
Peer-to-peer multicast live video streaming with interactive virtual pan/tilt/zoom functionality
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4712250
Aditya Mavlankar, Jeonghun Noh, Pierpaolo Baccichet, B. Girod
Video streaming with virtual pan/tilt/zoom functionality allows the viewer to watch arbitrary regions of a high-spatial-resolution scene. In our proposed system, the user controls his region-of-interest (ROI) interactively during the streaming session. The relevant portion of the scene is rendered on his screen immediately. An additional thumbnail overview aids his navigation. We design a peer-to-peer (P2P) multicast live video streaming system to provide the control of interactive region-of-interest (IROI) to large populations of viewers while exploiting the overlap of ROIs for efficient and scalable delivery. Our P2P overlay is altered on-the-fly in a distributed manner with the changing ROIs of the peers. The main challenges for such a system are posed by the stringent latency constraint, the churn in the ROIs of peers and the limited bandwidth at the server hosting the IROI video session. Experimental results with a network simulator indicate that the delivered quality is close to that of an alternative traditional unicast client-server delivery mechanism yet requiring less uplink capacity at the server.
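One small, concrete piece of such a system is mapping the viewer's region of interest to the spatial tiles (and hence multicast channels) that must be fetched. The sketch below shows that mapping under an assumed fixed tile grid; the tile size and ROI format are illustrative, not the paper's delivery mechanism.

```python
def tiles_for_roi(x, y, w, h, tile_w=256, tile_h=256):
    """Return the (col, row) indices of all tiles overlapping an ROI given
    as a pixel rectangle (x, y, w, h) in the high-resolution scene."""
    first_col, last_col = x // tile_w, (x + w - 1) // tile_w
    first_row, last_row = y // tile_h, (y + h - 1) // tile_h
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]
```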
Citations: 27
Adaptive-neighborhood best mean rank vector filter for impulsive noise removal
Pub Date: 2008-12-12 DOI: 10.1109/ICIP.2008.4711879
M. Ciuc, V. Vrabie, M. Herbin, C. Vertan, P. Vautrot
Rank-order based filters are usually implemented using reduced ordering, since there is no natural way to order vector data, such as color pixel values. This paper proposes a new statistic for multivariate data: a mean rank obtained by aggregating partial-ordering ranks. This statistic is then used for the reduced ordering of vector data; the median statistic is characterized by the best mean rank vector (BMRV). We devise two filtering structures based on the BMRV statistic: one that uses a classical square neighborhood, and one that is based on adaptive neighborhoods. We show that the proposed filters are highly effective for filtering color images heavily corrupted by impulsive noise, and compare favorably to state-of-the-art filtering structures.
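The mean-rank statistic itself is easy to sketch: rank the window's pixels within each colour channel, average the per-channel ranks, and output the pixel whose mean rank is smallest. The sketch below does exactly that for one window; tie handling and the adaptive-neighbourhood variant are omitted, and the helper names are illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def bmrv(window_pixels):
    """window_pixels: (N, C) array holding the N colour vectors of a window.
    Returns the window's best-mean-rank vector (the 'median' pixel)."""
    # Partial-ordering rank of every pixel within each channel.
    ranks = np.stack([rankdata(window_pixels[:, c])
                      for c in range(window_pixels.shape[1])], axis=1)
    mean_rank = ranks.mean(axis=1)            # aggregate the per-channel ranks
    return window_pixels[np.argmin(mean_rank)]
```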
Citations: 5