
Latest publications from the 2011 IEEE 13th International Workshop on Multimedia Signal Processing

Joint opportunistic spectrum access and scheduling for layered multicasting over cognitive radio networks
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093836
P. Polacek, Ting-Yeu Yang, Chih-Wei Huang
Cognitive radio (CR) represents an exciting new paradigm in spectrum utilization, with the potential to provide more bandwidth for exploding multimedia traffic. We focus on the layer-encoded video multicast problem over CR and contribute 1) a quality-based ranking in opportunistic spectrum access (OSA) for sub-channel selection, and 2) opportunistic layered multicasting (OLM) inspired scheduling designed particularly for CR networks. The 2-step ranking in OSA takes the outcomes of periodic sensing and prediction to expand the system-wide throughput while keeping collision rates acceptable. By tracking group receiving rate across CR sub-channels and data expiration time, we are able to realize the OLM advantage under much more challenging CR environments. The overall joint opportunistic spectrum access and scheduling (OSAS) algorithm finds precise transmission parameters to heuristically reach maximum system utility. Favorable results comparing OSAS against methods that are not fully opportunistic demonstrate that OSAS performs best.
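As a rough illustration of the sub-channel selection step described in the abstract, the sketch below ranks candidate sub-channels by expected usable throughput after discarding those whose predicted collision risk exceeds a tolerance. The field names (capacity, p_idle), the scoring rule, and the threshold are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a quality-based sub-channel ranking for opportunistic
# spectrum access: filter out sub-channels with unacceptable predicted collision
# risk, then rank the rest by expected throughput.

def rank_subchannels(subchannels, max_collision_rate=0.1):
    """Return eligible sub-channels sorted by expected throughput.

    Each sub-channel is a dict with:
      'capacity' - achievable rate when the channel is idle (e.g. Mbps)
      'p_idle'   - predicted probability that the primary user is absent
    """
    eligible = [c for c in subchannels if 1.0 - c['p_idle'] <= max_collision_rate]
    # Expected throughput = capacity discounted by the chance the channel is actually free.
    return sorted(eligible, key=lambda c: c['capacity'] * c['p_idle'], reverse=True)


if __name__ == "__main__":
    channels = [
        {'id': 0, 'capacity': 6.0, 'p_idle': 0.95},
        {'id': 1, 'capacity': 9.0, 'p_idle': 0.85},   # excluded: collision risk 0.15 > 0.1
        {'id': 2, 'capacity': 4.0, 'p_idle': 0.99},
    ]
    for c in rank_subchannels(channels):
        print(c['id'], round(c['capacity'] * c['p_idle'], 2))
```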
Citations: 6
Low-delay distributed multiple description coding for error-resilient video transmission
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093823
Wenhui Liu, K. R. Vijayanagar, Joohee Kim
In this paper, a low-delay distributed multiple description coding (LD-DMDC) method that combines the principles of multiple description coding (MDC) and distributed video coding (DVC) is proposed to further improve the error resilience of DVC. The proposed method generates two descriptions based on duplication and alternation of the discrete cosine transform (DCT) coefficients for the Wyner-Ziv (WZ) frames, and by exploiting H.264/AVC's dispersed flexible macroblock ordering (FMO) for the key frames. The proposed method makes efficient use of skip blocks to exploit temporal redundancies between successive frames and employs binary arithmetic coding instead of iterative channel coding to reduce system latency. Simulation results show that the proposed method is robust against transmission errors while maintaining low encoder complexity and low system latency.
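To make the duplication-and-alternation idea concrete, here is a minimal sketch that splits one block's zigzag-ordered DCT coefficients into two descriptions (duplicating a few low-frequency terms into both, alternating the remainder) and merges them back at a central decoder. The split point n_dup and the helper names are hypothetical; the paper's actual partitioning may differ.

```python
import numpy as np

# Illustrative splitting of a block's DCT coefficients into two descriptions
# by duplication (low frequencies) and alternation (high frequencies).

def split_descriptions(coeffs, n_dup=4):
    """coeffs: 1-D array of zigzag-ordered DCT coefficients of one block."""
    d1 = np.zeros_like(coeffs)
    d2 = np.zeros_like(coeffs)
    d1[:n_dup] = coeffs[:n_dup]              # low-frequency terms duplicated in both
    d2[:n_dup] = coeffs[:n_dup]
    d1[n_dup::2] = coeffs[n_dup::2]          # even-indexed high-frequency terms
    d2[n_dup + 1::2] = coeffs[n_dup + 1::2]  # odd-indexed high-frequency terms
    return d1, d2

def merge_descriptions(d1, d2, n_dup=4):
    """Central decoder: combine both descriptions back into one coefficient set."""
    merged = d1.copy()
    merged[n_dup + 1::2] = d2[n_dup + 1::2]
    return merged

if __name__ == "__main__":
    block = np.arange(16, dtype=float)       # stand-in for zigzag-ordered DCT coefficients
    d1, d2 = split_descriptions(block)
    assert np.allclose(merge_descriptions(d1, d2), block)
    print(d1, d2, sep="\n")
```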
Citations: 3
Multimedia storage security in cloud computing: An overview
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093775
Chun-Ting Huang, Zhongyuan Qin, C.-C. Jay Kuo
In this work, we conduct an in-depth survey of recent research activities on multimedia storage security in association with cloud computing. After an overview of the cloud storage system and its security problems, we focus on four hot research topics: data integrity, data confidentiality, access control, and data manipulation in the encrypted domain. We describe several key ideas and solutions proposed in the current literature and point out possible extensions and future research opportunities. Our research objective is to offer state-of-the-art knowledge to new researchers who would like to enter this exciting new field.
Citations: 31
Wyner-Ziv frame parallel decoding based on multicore processors
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093835
Alberto Corrales-García, José Luis Martínez, G. Fernández-Escribano, F. Quiles, W. Fernando
Wyner-Ziv video coding presents a new paradigm which offers low-complexity video encoding. However, the Wyner-Ziv paradigm accumulates high complexity at the decoder side, which can be problematic for applications with delay requirements. On the other hand, technological advances provide us with new hardware that supports parallel data processing. In this paper, a faster Wyner-Ziv video decoding scheme based on multicore processors is proposed. In this way, each frame is decoded through the collaboration of several processing units, achieving a time reduction of up to 71% without a significant rate-distortion penalty.
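A minimal sketch of the general idea, decoding one Wyner-Ziv frame by distributing its blocks across several worker processes and reassembling the result, is shown below. decode_block is a toy placeholder (a simple XOR against the side information), not the actual channel decoder, and the block-level task granularity is an assumption rather than the paper's exact partitioning.

```python
from multiprocessing import Pool

# Sketch: parallel decoding of one Wyner-Ziv frame on a multicore processor.

def decode_block(args):
    """Placeholder per-block decoder; the real decoder would run channel decoding."""
    index, parity_bits, side_info = args
    decoded = [p ^ s for p, s in zip(parity_bits, side_info)]
    return index, decoded

def decode_wz_frame_parallel(parity_blocks, side_info_blocks, workers=4):
    """Decode every block of one WZ frame concurrently, then restore block order."""
    tasks = [(i, pb, sb) for i, (pb, sb) in enumerate(zip(parity_blocks, side_info_blocks))]
    with Pool(processes=workers) as pool:
        results = pool.map(decode_block, tasks)
    return [block for _, block in sorted(results)]

if __name__ == "__main__":
    parity = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 0]]
    side_info = [[1, 1, 1, 0], [0, 1, 1, 0], [1, 1, 0, 1]]
    print(decode_wz_frame_parallel(parity, side_info, workers=2))
```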
Citations: 3
Depth camera assisted multiterminal video coding
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093808
Yifu Zhang, Yang Yang, Zixiang Xiong
This paper addresses multiterminal video coding with the help of a low-resolution depth camera. In this setup, the depth sequence, together with the high-resolution texture sequences, is collected, compressed, and transmitted separately to the joint decoder in order to obtain more accurate depth information and consequently better rate-distortion performance. At the decoder end, side information (for Wyner-Ziv coding) is generated based on successive refinement of the decompressed low-resolution depth map and texture frames warped from other terminals. Experimental results show sum-rate savings with the depth camera compared to without it at the same PSNR performance. Comparisons to simulcast and JMVM coding are also provided. Although the sum-rate gain of our multiterminal video coding scheme (with or without the depth camera) over simulcast is relatively small, this work is the first to incorporate a depth camera in a multiterminal setting.
Citations: 1
Epitome-based image compression using translational sub-pel mapping
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093786
S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Pérez
This paper addresses the problem of epitome construction for image compression. An optimized epitome construction method is first described, in which the epitome and the associated image reconstruction are both successively performed at full-pel and sub-pel accuracy. The resulting complete still-image compression scheme is then discussed, with details on some innovative tools. The PSNR-rate performance achieved with this epitome-based compression method is significantly higher than that obtained with H.264 Intra and with the state-of-the-art epitome construction method. A bit-rate saving of up to 16% compared to H.264 Intra is achieved.
Citations: 9
A comparative study of image correlation models for directional two-dimensional sources
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093812
Shuyuan Zhu, B. Zeng
The non-separable Karhunen-Loève transform (KLT) has been proven to be optimal for coding a directional 2-D source in which the dominant directional information is neither horizontal nor vertical. However, the KLT depends on the image data, and it is difficult to apply it in a practical image/video coding application. In order to solve this problem, it is necessary to build an image correlation model, and this model needs to adapt to the directional information so as to facilitate the design of 2-D non-separable transforms. In this paper, we compare two models that have been used commonly in practice: the absolute-distance model and the Euclidean-distance model. To this end, theoretical analysis and experimental study are carried out based on these two models, and the results show that the Euclidean-distance model consistently performs better than the absolute-distance model.
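For reference, the two models being compared are usually written as r(dx, dy) = rho**(|dx| + |dy|) for the absolute-distance (separable) model and r(dx, dy) = rho**sqrt(dx**2 + dy**2) for the Euclidean-distance model. The small sketch below builds both correlation matrices for a raster-scanned patch so the difference in diagonal decay can be inspected numerically; the block size and rho are illustrative, and any rotation of coordinates used in the paper to align with the dominant direction is omitted.

```python
import numpy as np

# Build pixel-correlation matrices for a block x block patch under the two
# standard 2-D correlation models (absolute distance vs. Euclidean distance).

def correlation_matrix(block, rho, model="euclidean"):
    """Correlation matrix of the pixels of a block x block patch, raster-scanned."""
    coords = np.array([(i, j) for i in range(block) for j in range(block)], dtype=float)
    dx = coords[:, None, 0] - coords[None, :, 0]
    dy = coords[:, None, 1] - coords[None, :, 1]
    if model == "absolute":
        dist = np.abs(dx) + np.abs(dy)
    else:
        dist = np.sqrt(dx ** 2 + dy ** 2)
    return rho ** dist

if __name__ == "__main__":
    rho = 0.95
    R_abs = correlation_matrix(4, rho, "absolute")
    R_euc = correlation_matrix(4, rho, "euclidean")
    # Correlation between pixel (0,0) and its diagonal neighbour (1,1):
    # the Euclidean model decays more slowly along the diagonal.
    print(R_abs[0, 5], R_euc[0, 5])
```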
Citations: 0
CGS quality scalability for HEVC
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093816
Zhongbo Shi, Xiaoyan Sun, Jizheng Xu
Scalable video coding provides an efficient way to serve video contents at different quality levels. Based on the development of emerging High Efficiency Video Coding (HEVC), we propose two coarse granular scalable (CGS) video coding schemes here. In scheme A, we present a multi-loop solution in which the fully reconstructed base pictures are utilized in the enhancement layer prediction. By inserting the reconstructed base picture (BP) into the list of reference pictures of the collocated enhancement layer frame, we enable the coarse granular quality scalability of HEVC with very limited changes. On the other hand, scheme B supports single loop decoding. It contains three inter-layer predictions similar to the scalable extension of H.264/AVC. Compared to scheme A, it decreases the decoding complexity by avoiding the motion compensation, deblocking filtering (DF) and adaptive loop filtering (ALF) in the base layer. The effectiveness of our proposed two coding schemes is evaluated by comparing with single-layer coding and simulcast.
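The scheme-A mechanism, appending the reconstructed base picture to the enhancement layer's reference picture list so that each block can be predicted either temporally or from the collocated base picture, can be illustrated with the toy sketch below. The data structures and the SAD-based selection are hypothetical simplifications, not HEVC syntax or the authors' encoder.

```python
# Toy illustration of inserting the reconstructed base picture (BP) into the
# enhancement-layer reference picture list and selecting the best reference per block.

def build_el_reference_list(temporal_refs, reconstructed_bp):
    """Enhancement-layer reference list = usual temporal references + base picture."""
    return list(temporal_refs) + [reconstructed_bp]

def best_reference(block, reference_list, cost):
    """Pick the reference whose prediction minimises the given cost (e.g. SAD)."""
    return min(reference_list, key=lambda ref: cost(block, ref))

if __name__ == "__main__":
    # 1-D "blocks" stand in for pixel data.
    sad = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
    current = [10, 12, 11, 13]
    temporal = [[9, 12, 10, 14], [20, 22, 21, 23]]
    base_pic = [10, 12, 11, 12]          # collocated reconstructed base picture
    refs = build_el_reference_list(temporal, base_pic)
    print(best_reference(current, refs, sad))   # the base picture wins in this toy case
```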
Citations: 7
Block-level adaptive optimization for inter-layer texture up-sampling in H.264/SVC
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093811
Kan Chang, Tuanfa Qin, Wenhao Zhang, Aidong Men
The H.264 Scalable Video Coding (SVC) extension has spatial scalability, which is able to provide sequences at various resolutions from a single encoded bit-stream. In order to reduce redundancies between different layers, for spatially scalable intra-coded frames, the co-located reconstructed 8×8 sub-macroblock in the base layer (BL) is up-sampled to predict the macroblock (MB) in the enhancement layer (EL). Unfortunately, the simple 1-D poly-phase up-sampling filter used in the current SVC is not capable of achieving ideal results, which limits the performance of inter-layer intra prediction (ILIP). This paper proposes an adaptive optimization method for inter-layer texture up-sampling by applying a Wiener filter and controlling it at the block level. Working as an additional part of ILIP, the proposed method can greatly reduce the prediction error between the original EL signals and the up-sampled BL signals. Experimental results show that the proposed method achieves a bit-rate reduction of up to 14.25% and a PSNR increment of up to 0.97 dB compared with the traditional method in the current SVC.
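A minimal sketch of the underlying operation, fitting least-squares (Wiener) filter coefficients that map a neighbourhood of up-sampled base-layer pixels onto the original enhancement-layer pixels, is given below. The window size, the plain lstsq solver, and the training granularity are illustrative assumptions rather than the paper's exact block-level design.

```python
import numpy as np

# Fit Wiener (least-squares) filter taps that predict enhancement-layer pixels
# from a local window of up-sampled base-layer pixels.

def train_wiener_filter(upsampled_bl, original_el, taps=3):
    """Return taps x taps filter coefficients minimising the squared prediction error."""
    h, w = original_el.shape
    pad = taps // 2
    padded = np.pad(upsampled_bl, pad, mode="edge")
    # One row of neighbourhood samples per enhancement-layer pixel.
    A = np.array([
        padded[i:i + taps, j:j + taps].ravel()
        for i in range(h) for j in range(w)
    ])
    b = original_el.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(taps, taps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    el = rng.random((8, 8))                            # original enhancement-layer block
    bl_up = el + 0.05 * rng.standard_normal((8, 8))    # noisy up-sampled base layer
    print(train_wiener_filter(bl_up, el).round(3))
```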
Citations: 0
Local learning-based image super-resolution
Pub Date : 2011-12-01 DOI: 10.1109/MMSP.2011.6093843
Xiaoqiang Lu, Haoliang Yuan, Yuan Yuan, Pingkun Yan, Luoqing Li, Xuelong Li
Local learning algorithms have been widely used in single-frame super-resolution reconstruction, for example the neighbor embedding algorithm [1] and the locality-preserving constraints algorithm [2]. The neighbor embedding algorithm is based on the manifold assumption, which states that the embedded neighbor patches are contained in a single manifold. However, the manifold assumption does not always hold. In this paper, we present a novel local learning-based single-frame image SR reconstruction algorithm with kernel ridge regression (KRR). Firstly, a Gabor filter is adopted to extract texture information from low-resolution patches as the feature. Secondly, each input low-resolution feature patch uses the K nearest neighbor algorithm to generate a local structure. Finally, KRR is employed to learn a map from input low-resolution (LR) feature patches to high-resolution (HR) feature patches in the corresponding local structure. Experimental results show the effectiveness of our method.
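A minimal kernel ridge regression sketch in the spirit of the pipeline described above is given below: given low-resolution (LR) feature patches and their high-resolution (HR) counterparts, it learns a mapping and predicts the HR patch for a new LR patch. The Gaussian kernel width and ridge parameter are illustrative, and the Gabor feature extraction and K-NN local-structure search are omitted for brevity.

```python
import numpy as np

# Plain kernel ridge regression (Gaussian kernel) from LR feature patches to HR patches.

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X_lr, Y_hr, lam=1e-3, gamma=0.5):
    """Solve (K + lam*I) alpha = Y for the dual coefficients."""
    K = gaussian_kernel(X_lr, X_lr, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X_lr)), Y_hr)

def krr_predict(X_train, alpha, x_new, gamma=0.5):
    k = gaussian_kernel(x_new[None, :], X_train, gamma)
    return k @ alpha   # predicted HR patch (flattened)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_lr = rng.random((50, 9))        # 3x3 LR feature patches (flattened)
    Y_hr = rng.random((50, 36))       # 6x6 HR patches (flattened)
    alpha = krr_fit(X_lr, Y_hr)
    print(krr_predict(X_lr, alpha, X_lr[0]).shape)   # -> (1, 36)
```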
Citations: 10