
Latest publications: 2011 IEEE 13th International Workshop on Multimedia Signal Processing

Complexity-adaptive Random Network Coding for Peer-to-Peer video streaming
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093834
A. Fiandrotti, Simone Zezza, E. Magli
We present a novel architecture for complexity-adaptive Random Network Coding (RNC) and its application to Peer-to-Peer (P2P) video streaming. Network coding enables the design of simple and effective P2P video distribution systems; however, it relies on computationally intensive packet coding operations that may exceed the capabilities of power-constrained devices. It is hence desirable that the complexity of network coding can be adjusted at every node according to its computational capabilities, so that different classes of nodes can coexist in the network. To this end, we model the computational complexity of network coding as the sum of a packet decoding cost, which is centrally minimized at the encoder, and a packet recoding cost, which is locally controlled by each node. Efficient network coding is achieved by exploiting the packet decoding process as a packet pre-recoding stage, hence increasing the chance that transmitted packets are innovative without increasing the recoding cost. Experiments in a P2P video streaming framework show that the proposed design enables the nodes of the network to operate at a wide range of computational complexity levels, while a larger number of low-complexity nodes are able to join the network and experience high-quality video.
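For readers unfamiliar with packet recoding in random network coding, the toy sketch below (Python, using GF(2) arithmetic, i.e. plain XOR, rather than the larger Galois fields typically used in practice) shows how a node produces a new coded packet by randomly combining the coded packets already in its buffer; this combining step is the recoding cost that the paper lets each node control locally. Function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def recode(buffer_packets, buffer_coeffs, rng=np.random.default_rng()):
    """Randomly recombine buffered coded packets over GF(2) (XOR).

    buffer_packets: list of equal-length numpy uint8 arrays (coded payloads)
    buffer_coeffs:  list of 0/1 coefficient vectors w.r.t. the original generation
    Returns a new coded packet and its coefficient vector.
    """
    mask = rng.integers(0, 2, size=len(buffer_packets))   # pick a random subset
    if not mask.any():
        mask[rng.integers(len(buffer_packets))] = 1       # avoid the all-zero combination
    payload = np.zeros_like(buffer_packets[0])
    coeffs = np.zeros_like(buffer_coeffs[0])
    for m, pkt, c in zip(mask, buffer_packets, buffer_coeffs):
        if m:
            payload ^= pkt                                # GF(2) addition is XOR
            coeffs ^= c
    return payload, coeffs
```

A power-constrained node could, for instance, cap how many buffered packets it mixes per recoded packet to bound its recoding cost; that is one plausible way to realize the locally controlled cost the abstract refers to, not necessarily the authors' mechanism.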
Citations: 7
Low-complexity priority based packet scheduling for streaming MPEG-4 SLS
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093826
R. Yu, Dajun Wu, Jianping Chen, S. Rahardja
In this paper, we propose a low-complexity, priority-based packet scheduling algorithm for streaming MPEG-4 Scalable to Lossless (SLS) encoded audio. In the proposed system, the SLS-encoded frames are partitioned into data units of different quality layers, which are transmitted according to their quality contribution to the final decoded audio and their urgency relative to the playback progress. Experimental results show that the proposed scheduling algorithm has an even lower complexity than a traditional greedy packet scheduling algorithm, while outperforming it by a significant margin in terms of the quality of the streamed audio.
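As a rough illustration of the kind of priority rule described above (quality contribution weighed against playback urgency), the sketch below ranks data units by a combined score. The score and its weighting are hypothetical, not the formula used in the paper.

```python
def schedule(data_units, now, alpha=0.5):
    """Order data units for transmission by a combined priority score.

    Each unit is a dict with:
      'quality_gain' - distortion reduction if the unit arrives in time
      'deadline'     - scheduled playback time of its frame
      'size_bits'    - payload size
    alpha trades off quality contribution against urgency (hypothetical weighting).
    """
    def priority(u):
        urgency = 1.0 / max(u['deadline'] - now, 1e-3)   # closer deadline -> more urgent
        return alpha * u['quality_gain'] / u['size_bits'] + (1 - alpha) * urgency
    return sorted(data_units, key=priority, reverse=True)
```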
Citations: 1
Objective evaluation of light field rendering methods using effective sampling density
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093799
H. Shidanshidi, F. Safaei, W. Li
Light field rendering (LFR) is an active research area in computer vision and computer graphics. LFR plays a crucial role in free viewpoint video systems (FVV). Several rendering algorithms have been suggested for LFR. However, comparative evaluation of these methods is often limited to subjective assessment of the output. To overcome this problem, this paper presents a geometric measurement, Effective Sampling Density of the scene, referred to as effective sampling for brevity, for objective comparison and evaluation of LFR algorithms. We have derived the effective sampling for the well-known LFR methods. Both theoretical study and numerical simulation have shown that the proposed effective sampling is an effective indicator of the performance for LFR methods.
Citations: 19
Motion parallax based restitution of 3D images on legacy consumer mobile devices
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093789
M. Rerábek, Lutz Goldmann, Jong-Seok Lee, T. Ebrahimi
While 3D display technologies are already widely available for cinema and home or corporate use, only a few portable devices currently feature 3D display capabilities. Moreover, the large majority of 3D display solutions rely on binocular perception. In this paper, we study alternative methods for the restitution of 3D images on conventional 2D displays and analyze their respective performance. This particularly includes the extension of wiggle stereoscopy to portable devices, which relies on motion parallax as an additional depth cue. The goal of this paper is to compare two different 3D display techniques, the anaglyph method, which provides binocular depth cues, and a method based on motion parallax, and to show that the motion parallax based approach to presenting 3D images on a consumer 2D portable screen is equivalent to the well-known anaglyph method. The subsequently conducted subjective quality tests show that viewers even prefer wiggle over anaglyph stereoscopy, mainly due to better color reproduction and comparable depth perception.
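The two presentation techniques compared in the paper are easy to reproduce from a stereo pair; the sketch below builds a standard red-cyan anaglyph and a wiggle sequence (alternating left/right views) with NumPy. It is a minimal illustration, not the authors' rendering pipeline.

```python
import numpy as np

def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left view, green/blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

def wiggle_frames(left, right, repeats=10):
    """Wiggle stereoscopy: alternate the two views so motion parallax conveys depth."""
    return [left if i % 2 == 0 else right for i in range(2 * repeats)]
```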
Citations: 4
A compressed domain change detection algorithm for RTP streams in video surveillance applications
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093838
Marcus Laumer, P. Amon, A. Hutter, André Kaup
This paper presents a novel change detection algorithm for the compressed domain. Many video surveillance systems in practical use transmit their video data over a network by using the Real-time Transport Protocol (RTP). Therefore, the presented algorithm concentrates on analyzing RTP streams to detect major changes within contained video content. The paper focuses on a reliable preselection for further analysis modules by decreasing the number of events to be investigated. The algorithm is designed to work on scenes with mainly static background, like in indoor video surveillance streams. The extracted stream elements are RTP timestamps and RTP packet sizes. Both values are directly accessible by efficient byte-reading operations without any further decoding of the video content. Hence, the proposed approach is codec-independent, while at the same time its very low complexity enables the use in extensive video surveillance systems. About 40,000 frames per second of a single RTP stream can be processed on an Intel® Core™ 2 Duo CPU at 2 GHz and 2 GB RAM, without decreasing the efficiency of the algorithm.
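The stream elements the algorithm relies on, RTP timestamps and packet sizes, can be read directly from the 12-byte fixed RTP header without touching the video payload. The sketch below parses the timestamp field and flags frames whose aggregated packet size deviates strongly from a running average; the deviation test is a simplified stand-in, not the paper's actual detection rule.

```python
import struct
from collections import defaultdict

def rtp_timestamp(packet: bytes) -> int:
    """RTP timestamp: 32-bit big-endian field at byte offset 4 of the fixed header."""
    return struct.unpack_from('!I', packet, 4)[0]

def detect_changes(packets, threshold=2.0, alpha=0.05):
    """Group packets into frames by RTP timestamp, then flag frames whose total
    size exceeds an exponential moving average by more than `threshold` times.
    (Simplified illustration; not the exact rule used in the paper.)"""
    frame_sizes = defaultdict(int)
    for pkt in packets:
        frame_sizes[rtp_timestamp(pkt)] += len(pkt)
    flagged, avg = [], None
    for ts in sorted(frame_sizes):
        size = frame_sizes[ts]
        if avg is not None and size > threshold * avg:
            flagged.append(ts)
        avg = size if avg is None else (1 - alpha) * avg + alpha * size
    return flagged
```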
Citations: 6
ECG data compression based on wave atom transform
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093793
Hongteng Xu, Guangtao Zhai
In this paper, a new ECG signal compression algorithm based on the wave atom transform is presented. Based on the assumption that the ECG is an oscillatory signal, we decompose the ECG signal with wave atoms and trim the insignificant coefficients. The wave atom decomposition has been shown to yield a significantly sparser representation of oscillatory signals than other existing transform methods. In our experiments, the energy of the wave atom coefficients indeed converges faster than that of wavelet coefficients. The most significant advantage of our algorithm is that, unlike many conventional methods, its performance does not depend on QRS detection, which simplifies the architecture of the compression system and is beneficial for telemedicine applications. After the wave atom transform, the data stream is partitioned and coded with a hybrid entropy coding strategy combining delta coding, run-length coding and arithmetic coding. Experimental results on the MIT-BIH arrhythmia database show that our algorithm achieves a high compression ratio (CR > 10) with a percentage root mean square difference (PRD) under 1%.
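The two figures of merit quoted in the abstract are straightforward to compute; the sketch below evaluates the compression ratio and the PRD (using the common definition PRD = 100·sqrt(Σ(x−x̂)²/Σx²)) for a reconstructed ECG record. The thresholding helper is only a placeholder for trimming insignificant transform coefficients, since no wave atom implementation is assumed here.

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root-mean-square difference (common definition;
    the paper reports PRD < 1% at CR > 10)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def trim_coefficients(coeffs, keep_fraction=0.05):
    """Placeholder for coefficient trimming: keep only the largest-magnitude
    fraction of transform coefficients and zero the rest."""
    k = max(1, int(keep_fraction * coeffs.size))
    thresh = np.partition(np.abs(coeffs).ravel(), -k)[-k]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
```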
Citations: 7
A low-rank matrix completion based intra prediction for H.264/AVC
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093848
Jin Wang, Yunhui Shi, Wenpeng Ding, Baocai Yin
Intra prediction plays an important role in reducing the spatial redundancy of intra-frame encoding in H.264/AVC. In this paper, we propose a low-rank matrix completion based intra prediction to improve the prediction efficiency. According to low-rank matrix completion theory, a low-rank matrix can be exactly recovered from quite limited samples with high probability under mild conditions. After moderate rearrangement and organization, image blocks can be represented as a low-rank or approximately low-rank matrix. Intra prediction can then be formulated as a matrix completion problem, so the unknown pixels can be inferred from limited samples with very high accuracy. Specifically, we rearrange previously encoded blocks that are similar to the current block to generate an observation matrix, from which the prediction is obtained by solving a low-rank minimization problem. Experimental results demonstrate that the proposed scheme achieves average bit-rate savings of 5.39% for CIF sequences and 4.21% for QCIF sequences compared with standard H.264/AVC.
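A generic way to solve the matrix completion step described above is singular value thresholding, sketched below with NumPy. The abstract does not specify which solver the authors use, so this is only an illustration of how missing (to-be-predicted) entries of a low-rank matrix can be inferred from the observed ones.

```python
import numpy as np

def complete_low_rank(M, mask, tau=5.0, step=1.2, iters=300):
    """Singular value thresholding (SVT) for low-rank matrix completion.

    M    : observation matrix (unknown entries may be zero)
    mask : 1 where an entry is known (encoded pixels), 0 where it must be predicted
    Returns the completed low-rank estimate.
    """
    Y = np.zeros_like(M, dtype=float)
    X = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        Y += step * mask * (M - X)                   # correct only on known entries
    return X
```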
Citations: 5
QoE-driven live and on-demand LTE uplink video transmission
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093821
A. E. Essaili, Liang Zhou, Damien Schroeder, E. Steinbach, W. Kellerer
We consider the joint upstreaming of live and on-demand user-generated video content over LTE using a Quality-of-Experience driven approach. We contribute to the state-of-the-art work on multimedia scheduling in three aspects: 1) We jointly optimize the transmission of live and time-shifted video under scarce uplink resources by transmitting a basic quality in real time and uploading a refined quality for on-demand consumption. 2) We propose a producer-consumer deadline-aware scheduling algorithm that incorporates both the physical state of the mobile producer (e.g., cache fullness) and the scheduled playout time at the end-user. 3) We show that the scheduling decisions in 1) and 2) can be determined locally for each mobile producer. We additionally present an analytical framework for decentralized scalable video transmission and prove that there exists an optimal solution to our problem. Simulation results for the LTE uplink further demonstrate the significance of our proposed optimization on the overall user experience.
Citations: 22
Mixed-resolution Wyner-Ziv video coding based on selective data pruning
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093784
T. Phan, Yuichi Tanaka, Madoka Hasegawa, Shigeo Kato
In current distributed video coding (DVC), interpolation is performed at the decoder and the interpolated pixels are reconstructed by using error-correcting codes such as Turbo codes and LDPC. There are two ways of downsampling video sequences at the encoder: temporally or spatially. Traditionally, temporal downsampling, i.e., frame dropping, is used for DVC. Approaches based on spatial downsampling (scaling) have also been investigated. Unfortunately, most of them rely on uniform downsampling, so details in the video sequences are often discarded. For example, edges and textured regions are difficult to interpolate and thus require many parity bits to restore the interpolated portions in spatial-domain DVC. In this paper, we propose a new spatial-domain DVC based on adaptive line dropping, called selective data pruning (SDP). SDP is a simple non-uniform downsampling method in which the pruned lines are chosen to avoid cutting across edges and textures. Experimental results show that the proposed method outperforms a conventional DVC for sequences with a large amount of motion.
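As a rough sketch of the line-selection idea (the abstract only states that pruned lines should avoid cutting across edges and textures), the snippet below keeps the rows with the highest gradient energy and drops the rest; the actual selection criterion used in the paper may differ.

```python
import numpy as np

def select_rows_to_keep(frame, keep_ratio=0.75):
    """Keep the rows with the most edge/texture energy and prune the rest.
    Simplified stand-in for the paper's selective data pruning criterion."""
    f = frame.astype(float)
    gx = np.abs(np.diff(f, axis=1)).sum(axis=1)      # horizontal detail per row
    gy = np.abs(np.diff(f, axis=0))                  # vertical detail between rows
    gy = np.vstack([gy, gy[-1:]]).sum(axis=1)        # pad to one value per row
    energy = gx + gy
    n_keep = max(1, int(round(keep_ratio * f.shape[0])))
    return np.sort(np.argsort(energy)[-n_keep:])     # indices of rows to keep
```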
Citations: 3
Compression of VQM features for low bit-rate video quality monitoring
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093809
Mina Makar, Y. Lin, A. Araújo, B. Girod
Reduced reference video quality assessment techniques provide a practical and convenient way of evaluating the quality of a processed video. In this paper, we propose a method to efficiently compress standardized VQM (Video Quality Model) [1] features to bit-rates that are small relative to the transmitted video. This is achieved through two stages of compression. In the first stage, we remove the redundancy in the features by only transmitting the necessary original video features at the lowest acceptable resolution for the calculation of the final VQM value. The second stage involves using the features of the processed video at the receiver as side-information for efficient entropy coding and reconstruction of the original video features. Experimental results demonstrate that our approach achieves high compression ratios of more than 30× with small error in the final VQM values.
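The second compression stage, which uses the processed-video features available at the receiver as side information, can be illustrated with a simple residual coder: only the (typically small) difference between the original and processed features is quantized and coded. The quantizer step and the entropy estimate below are illustrative; the paper's actual coder is more elaborate.

```python
import numpy as np

def encode_with_side_info(orig_feat, proc_feat, q=0.05):
    """Quantize the difference between original and processed-video features;
    the receiver already has proc_feat, so only the small residual is coded.
    Simplified illustration of side-information coding, not the paper's coder."""
    residual = np.round((orig_feat - proc_feat) / q).astype(int)
    vals, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    bits = -(p * np.log2(p)).sum() * residual.size   # crude entropy-based bit estimate
    return residual, bits

def decode_with_side_info(residual, proc_feat, q=0.05):
    return proc_feat + residual * q
```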
Citations: 3