Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093836
P. Polacek, Ting-Yeu Yang, Chih-Wei Huang
Cognitive radio (CR) represents an exciting new paradigm in spectrum utilization, potentially offering more bandwidth for exploding multimedia traffic. We focus on the layer-encoded video multicast problem over CR and contribute 1) a quality-based ranking in opportunistic spectrum access (OSA) for sub-channel selection, and 2) opportunistic layered multicasting (OLM) inspired scheduling designed specifically for CR networks. The two-step ranking in OSA uses the outcomes of periodic sensing and prediction to increase system-wide throughput while keeping collision rates acceptable. By tracking group receiving rates across CR sub-channels and data expiration times, we realize the OLM advantage in the far more challenging CR environment. The overall joint opportunistic spectrum access and scheduling (OSAS) algorithm finds precise transmission parameters to heuristically approach maximum system utility. Favorable results comparing OSAS with partially opportunistic methods show that OSAS performs best.
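The two-step idea of filtering sub-channels by predicted collision risk and then ranking survivors by expected throughput can be sketched as follows. This is an illustrative toy, not the paper's actual ranking; the field names, threshold, and scoring formula are assumptions.

```python
# Sketch of a 2-step sub-channel ranking for opportunistic spectrum access.
# All names and thresholds here are illustrative, not taken from the paper.

def rank_subchannels(channels, max_collision=0.1):
    """channels: list of dicts with predicted primary-user activity
    ('busy_prob') and achievable rate ('rate', e.g. in Mbps)."""
    # Step 1: keep only sub-channels whose predicted collision rate is acceptable.
    usable = [c for c in channels if c["busy_prob"] <= max_collision]
    # Step 2: rank survivors by expected throughput = rate * idle probability.
    return sorted(usable, key=lambda c: c["rate"] * (1 - c["busy_prob"]), reverse=True)

channels = [
    {"id": 1, "busy_prob": 0.05, "rate": 10.0},
    {"id": 2, "busy_prob": 0.30, "rate": 20.0},   # too risky: filtered out in step 1
    {"id": 3, "busy_prob": 0.02, "rate": 6.0},
]
ranked = rank_subchannels(channels)  # channel 1 (9.5 expected) before channel 3 (5.88)
```

In this sketch the collision constraint is a hard filter; the paper's ranking could equally trade collisions off against utility, but the filter-then-rank structure conveys the two-step flavor.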
Title: "Joint opportunistic spectrum access and scheduling for layered multicasting over cognitive radio networks" (2011 IEEE 13th International Workshop on Multimedia Signal Processing)
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093823
Wenhui Liu, K. R. Vijayanagar, Joohee Kim
In this paper, a low-delay distributed multiple description coding (LD-DMDC) method that combines the principles of multiple description coding (MDC) and distributed video coding (DVC) is proposed to further improve the error resilience of DVC. The proposed method generates two descriptions by duplicating and alternating the discrete cosine transform (DCT) coefficients of the Wyner-Ziv (WZ) frames and by exploiting H.264/AVC's dispersed flexible macroblock ordering (FMO) for the key frames. It makes efficient use of skip blocks to exploit temporal redundancy between successive frames and employs binary arithmetic coding instead of iterative channel coding to reduce system latency. Simulation results show that the proposed method is robust against transmission errors while maintaining low encoder complexity and low system latency.
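The duplicate-and-alternate split of DCT coefficients can be illustrated on a 1-D coefficient list: the DC term is duplicated into both descriptions, and the AC terms alternate between them. This is a minimal sketch of the general MDC idea, assuming a zig-zag-ordered coefficient list; it is not the paper's exact codec.

```python
def make_descriptions(coeffs):
    """Split DCT coefficients (zig-zag order) into two descriptions:
    the DC coefficient is duplicated in both, AC coefficients alternate
    between them (the missing ones are zeroed)."""
    d1 = [coeffs[0]] + [c if i % 2 == 1 else 0 for i, c in enumerate(coeffs[1:], 1)]
    d2 = [coeffs[0]] + [c if i % 2 == 0 else 0 for i, c in enumerate(coeffs[1:], 1)]
    return d1, d2

def merge_descriptions(d1, d2):
    """Central decoder: take whichever coefficient each description carries."""
    return [a if a != 0 else b for a, b in zip(d1, d2)]

coeffs = [8, 5, -3, 2]                    # toy coefficient block
d1, d2 = make_descriptions(coeffs)        # d1 = [8, 5, 0, 2], d2 = [8, 0, -3, 0]
recovered = merge_descriptions(d1, d2)    # both received -> full reconstruction
```

If only one description arrives, the decoder still has the DC term and half the AC terms, which is exactly the graceful-degradation property MDC is designed for.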
Title: "Low-delay distributed multiple description coding for error-resilient video transmission"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093775
Chun-Ting Huang, Zhongyuan Qin, C.-C. Jay Kuo
In this work, we conduct an in-depth survey of recent research on multimedia storage security in cloud computing. After an overview of cloud storage systems and their security problems, we focus on four active research topics: data integrity, data confidentiality, access control, and data manipulation in the encrypted domain. We describe several key ideas and solutions proposed in the current literature and point out possible extensions and future research opportunities. Our objective is to offer state-of-the-art knowledge to new researchers entering this exciting field.
Title: "Multimedia storage security in cloud computing: An overview"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093835
Alberto Corrales-García, José Luis Martínez, G. Fernández-Escribano, F. Quiles, W. Fernando
Wyner-Ziv video coding presents a new paradigm that offers low-complexity video encoding. However, it concentrates high complexity at the decoder side, which can cause difficulties for applications with delay requirements. Meanwhile, technological advances provide new hardware that supports parallel data processing. In this paper, a faster Wyner-Ziv video decoding scheme based on multicore processors is proposed: each frame is decoded through the collaboration of several processing units, achieving a time reduction of up to 71% without a significant rate-distortion penalty.
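The structure of splitting one frame's decoding work across cooperating processing units can be sketched with a worker pool. The chunk decoder below is a stand-in placeholder (it just transforms samples), and a real implementation would use process-level parallelism for CPU-bound decoding; only the split/decode/reassemble structure is the point.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_chunk(chunk):
    # Placeholder for real Wyner-Ziv chunk decoding (illustrative only):
    # here each sample is simply "reconstructed" by adding 1.
    return [x + 1 for x in chunk]

def parallel_decode(frame, n_workers=4):
    """Split one frame into n_workers chunks and decode them cooperatively,
    then reassemble the decoded chunks in order."""
    size = -(-len(frame) // n_workers)  # ceiling division
    chunks = [frame[i:i + size] for i in range(0, len(frame), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        decoded = pool.map(decode_chunk, chunks)  # map preserves chunk order
    return [x for chunk in decoded for x in chunk]

frame = list(range(8))
result = parallel_decode(frame)
```

The 71% figure in the abstract corresponds to roughly a 3.5x speedup, consistent with several cores sharing one frame's workload minus coordination overhead.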
Title: "Wyner-Ziv frame parallel decoding based on multicore processors"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093808
Yifu Zhang, Yang Yang, Zixiang Xiong
This paper addresses multiterminal video coding with the help of a low-resolution depth camera. In this setup, the depth sequence and the high-resolution texture sequences are collected, compressed, and transmitted separately to the joint decoder in order to obtain more accurate depth information and consequently better rate-distortion performance. At the decoder end, side information (for Wyner-Ziv coding) is generated by successive refinement of the decompressed low-resolution depth map and texture frames warped from the other terminals. Experimental results show sum-rate savings with the depth camera over the same setup without it at the same PSNR. Comparisons with simulcast and JMVM coding are also provided. Although the sum-rate gain of our multiterminal video coding scheme (with or without the depth camera) over simulcast is relatively small, this work is the first to incorporate a depth camera in a multiterminal setting.
Title: "Depth camera assisted multiterminal video coding"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093786
S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Pérez
This paper addresses the problem of epitome construction for image compression. An optimized epitome construction method is first described, in which the epitome and the associated image reconstruction are successively performed at full-pel and sub-pel accuracy. The resulting complete still-image compression scheme is then discussed, with details on several innovative tools. The PSNR-rate performance achieved with this epitome-based compression method is significantly higher than that obtained with H.264 Intra and with the state-of-the-art epitome construction method. A bit-rate saving of up to 16% compared with H.264 Intra is achieved.
Title: "Epitome-based image compression using translational sub-pel mapping"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093812
Shuyuan Zhu, B. Zeng
The non-separable Karhunen-Loève transform (KLT) has been proven optimal for coding a directional 2-D source whose dominant directional information is neither horizontal nor vertical. However, the KLT depends on the image data, making it difficult to apply in practical image/video coding applications. To solve this problem, one must build an image correlation model that adapts to the directional information, so as to facilitate the design of 2-D non-separable transforms. In this paper, we compare two models commonly used in practice: the absolute-distance model and the Euclidean-distance model. Theoretical analysis and an experimental study based on these two models show that the Euclidean-distance model consistently outperforms the absolute-distance model.
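The two correlation models differ only in how pixel separation enters the exponent of the correlation coefficient. A minimal sketch, assuming the standard exponential forms r = ρ^(|dx|+|dy|) (absolute distance) and r = ρ^√(dx²+dy²) (Euclidean distance); the function names are mine:

```python
import math

def abs_distance_corr(rho, dx, dy):
    """Absolute-distance (separable) model: r = rho^(|dx| + |dy|)."""
    return rho ** (abs(dx) + abs(dy))

def euclid_distance_corr(rho, dx, dy):
    """Euclidean-distance (isotropic) model: r = rho^sqrt(dx^2 + dy^2)."""
    return rho ** math.sqrt(dx * dx + dy * dy)

# The two models agree along the axes but diverge off-axis; the gap is
# largest along the 45-degree diagonal, where sqrt(2) < 2:
rho = 0.95
r_abs = abs_distance_corr(rho, 1, 1)     # 0.95 ** 2
r_euc = euclid_distance_corr(rho, 1, 1)  # 0.95 ** sqrt(2), i.e. larger
```

Because the Euclidean model predicts higher correlation along diagonals, it is the more natural fit for sources whose dominant direction is oblique, which is consistent with the paper's conclusion.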
Title: "A comparative study of image correlation models for directional two-dimensional sources"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093816
Zhongbo Shi, Xiaoyan Sun, Jizheng Xu
Scalable video coding provides an efficient way to serve video content at different quality levels. Building on the emerging High Efficiency Video Coding (HEVC) standard, we propose two coarse granular scalable (CGS) video coding schemes. In scheme A, we present a multi-loop solution in which the fully reconstructed base pictures are used in enhancement-layer prediction. By inserting the reconstructed base picture (BP) into the reference picture list of the collocated enhancement-layer frame, we enable coarse granular quality scalability in HEVC with very limited changes. Scheme B, by contrast, supports single-loop decoding. It contains three inter-layer predictions similar to those in the scalable extension of H.264/AVC. Compared with scheme A, it decreases decoding complexity by avoiding motion compensation, deblocking filtering (DF), and adaptive loop filtering (ALF) in the base layer. The effectiveness of the two proposed schemes is evaluated against single-layer coding and simulcast.
Title: "CGS quality scalability for HEVC"
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093811
Kan Chang, Tuanfa Qin, Wenhao Zhang, Aidong Men
The H.264 Scalable Video Coding (SVC) extension offers spatial scalability, providing sequences at various resolutions from a single encoded bit-stream. To reduce redundancy between layers in spatially scalable intra-coded frames, the co-located reconstructed 8×8 sub-macroblock in the base layer (BL) is up-sampled to predict the macroblock (MB) in the enhancement layer (EL). Unfortunately, the simple 1-D poly-phase up-sampling filter used in current SVC is not capable of achieving ideal results, which limits the performance of inter-layer intra prediction (ILIP). This paper proposes an adaptive optimization method for inter-layer texture up-sampling that applies a Wiener filter and controls it at the block level. Working as an additional stage of ILIP, the proposed method greatly reduces the prediction error between the original EL signals and the up-sampled BL signals. Experimental results show that the proposed method achieves a bit-rate reduction of up to 14.25% and a PSNR increase of up to 0.97 dB compared with the traditional method in current SVC.
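The core of a Wiener-filter refinement stage is a least-squares fit of filter taps that map the up-sampled base-layer signal toward the original enhancement-layer signal. The 1-D toy below sketches that idea under my own simplifications (edge padding, a 3-tap filter, whole-signal rather than per-block training); it is not the paper's actual filter design.

```python
import numpy as np

def train_wiener_taps(upsampled, original, n_taps=3):
    """Least-squares FIR taps mapping the up-sampled base-layer signal
    toward the original enhancement-layer signal (1-D toy version)."""
    pad = n_taps // 2
    x = np.pad(upsampled, pad, mode="edge")
    # Each row holds the n_taps neighborhood around one sample.
    A = np.stack([x[i:i + n_taps] for i in range(len(upsampled))])
    taps, *_ = np.linalg.lstsq(A, original, rcond=None)
    return taps

def apply_taps(upsampled, taps):
    pad = len(taps) // 2
    x = np.pad(upsampled, pad, mode="edge")
    return np.array([x[i:i + len(taps)] @ taps for i in range(len(upsampled))])

# Toy data: the "original" is a smoothed version of the up-sampled signal,
# so trained taps should shrink the prediction error versus no filtering.
up = np.array([1.0, 4.0, 2.0, 8.0, 3.0, 6.0])
orig = np.convolve(up, [0.25, 0.5, 0.25], mode="same")
taps = train_wiener_taps(up, orig)
err_before = float(np.mean((up - orig) ** 2))
err_after = float(np.mean((apply_taps(up, taps) - orig) ** 2))
```

Controlling the filter at block level, as the paper proposes, amounts to running this fit per block and signaling the taps (or an on/off decision), trading side-information rate for prediction accuracy.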
Title: "Block-level adaptive optimization for inter-layer texture up-sampling in H.264/SVC"
Local learning algorithms have been widely used in single-frame super-resolution (SR) reconstruction, for example the neighbor embedding algorithm [1] and the locality preserving constraints algorithm [2]. Neighbor embedding rests on the manifold assumption, namely that the embedded neighbor patches lie on a single manifold. However, the manifold assumption does not always hold. In this paper, we present a novel local learning-based single-frame image SR reconstruction algorithm using kernel ridge regression (KRR). First, a Gabor filter is adopted to extract texture information from low-resolution patches as the feature. Second, each input low-resolution feature patch uses the K-nearest-neighbor algorithm to generate a local structure. Finally, KRR is employed to learn a mapping from input low-resolution (LR) feature patches to high-resolution (HR) feature patches within the corresponding local structure. Experimental results show the effectiveness of our method.
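The KRR step learns a nonlinear map by solving a regularized linear system in kernel space: alpha = (K + lambda*I)^-1 y. The sketch below shows that mechanic on 1-D toy data with a Gaussian kernel; the feature extraction (Gabor), KNN grouping, and all parameter values are omitted or assumed, so this is only the regression core, not the paper's pipeline.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row-vector sample sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-6, gamma=10.0):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=10.0):
    """Predict via the kernel expansion f(x) = sum_i alpha_i k(x, x_i)."""
    return gaussian_kernel(X_new, X_train, gamma) @ alpha

# Toy "local structure": LR features (1-D) mapped to HR values by sin.
X = np.linspace(0.0, 1.0, 5)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)   # near-interpolation of the training targets
```

In the paper's setting, X would hold the Gabor features of the K nearest LR patches and y the corresponding HR patch values, with one such regression per local structure.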
Title: "Local learning-based image super-resolution"
Authors: Xiaoqiang Lu, Haoliang Yuan, Yuan Yuan, Pingkun Yan, Luoqing Li, Xuelong Li
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093843