Joint opportunistic spectrum access and scheduling for layered multicasting over cognitive radio networks
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093836
P. Polacek, Ting-Yeu Yang, Chih-Wei Huang
Cognitive radio (CR) represents an exciting new paradigm for spectrum utilization, with the potential to provide more bandwidth for exploding multimedia traffic. We focus on the problem of layer-encoded video multicast over CR and contribute 1) a quality-based ranking for sub-channel selection in opportunistic spectrum access (OSA), and 2) a scheduling method inspired by opportunistic layered multicasting (OLM) and designed specifically for CR networks. The two-step ranking in OSA uses the outcomes of periodic sensing and prediction to increase system-wide throughput while keeping collision rates acceptable. By tracking group receiving rates across CR sub-channels and data expiration times, we are able to realize the OLM advantage in the much more challenging CR environment. The overall joint opportunistic spectrum access and scheduling (OSAS) algorithm heuristically determines transmission parameters that approach maximum system utility. Comparisons with methods that are not fully opportunistic show favorable results, with OSAS performing best.
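As an illustration only (not the paper's actual OSA algorithm), the following minimal sketch shows one way a two-step, quality-based sub-channel ranking could look: channels whose predicted collision probability exceeds a tolerated cap are filtered out, and the survivors are ranked by expected throughput. All names and the 0.1 collision cap are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SubChannel:
    idx: int
    p_idle: float        # predicted probability that the primary user is absent
    rate_mbps: float     # achievable multicast rate if the sub-channel is free

def rank_subchannels(channels, max_collision=0.1):
    """Two-step ranking sketch: filter by collision risk, then sort by
    expected throughput so a scheduler can pick sub-channels greedily."""
    # Step 1: discard sub-channels whose collision probability exceeds the cap.
    admissible = [c for c in channels if 1.0 - c.p_idle <= max_collision]
    # Step 2: rank the remaining sub-channels by expected throughput.
    return sorted(admissible, key=lambda c: c.p_idle * c.rate_mbps, reverse=True)

channels = [SubChannel(0, 0.95, 6.0), SubChannel(1, 0.80, 12.0), SubChannel(2, 0.99, 3.0)]
for c in rank_subchannels(channels):
    print(c.idx, round(c.p_idle * c.rate_mbps, 2))
```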
{"title":"Joint opportunistic spectrum access and scheduling for layered multicasting over cognitive radio networks","authors":"P. Polacek, Ting-Yeu Yang, Chih-Wei Huang","doi":"10.1109/MMSP.2011.6093836","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093836","url":null,"abstract":"Cognitive radio (CR) represents an exciting new paradigm on spectrum utilization and potentially more bandwidth for exploding multimedia traffic. We focus on the layer encoded video multicast problem over CR and contribute 1) a quality based ranking in opportunistic spectrum access (OSA) for sub-channel selection, and 2) opportunistic layered multicasting (OLM) inspired scheduling designed particularly for CR networks. The 2-step ranking in OSA takes outcomes of periodic sensing and prediction to expand the system-wide throughput while keeping collision rates acceptable. By tracking group receiving rate across CR sub-channels and data expiration time, we are able to realize the OLM advantage under much more challenging CR environments. The overall joint opportunistic spectrum access and scheduling (OSAS) algorithm finds precise transmission parameters to heuristically reach maximum system utility. Favorable results comparing OSAS with not fully opportunistic methods demonstrate OSAS to be the best performing one.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134424814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-delay distributed multiple description coding for error-resilient video transmission
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093823
Wenhui Liu, K. R. Vijayanagar, Joohee Kim
In this paper, a low-delay distributed multiple description coding (LD-DMDC) method that combines the principles of multiple description coding (MDC) and distributed video coding (DVC) is proposed to further improve the error resilience of DVC. The proposed method generates two descriptions by duplicating and alternating the discrete cosine transform (DCT) coefficients of the Wyner-Ziv (WZ) frames, and by exploiting the dispersed flexible macroblock ordering (FMO) of H.264/AVC for the key frames. The method makes efficient use of skip blocks to exploit temporal redundancy between successive frames and employs binary arithmetic coding instead of iterative channel coding to reduce system latency. Simulation results show that the proposed method is robust against transmission errors while maintaining low encoder complexity and low system latency.
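A minimal sketch of one plausible reading of "duplication and alternation" of DCT coefficients (not the authors' exact rule): the low-frequency coefficients of each block are duplicated in both descriptions, while the remaining coefficients are alternated between them, so a lost description can be concealed from the surviving one. The block layout and n_dup value are assumptions.

```python
import numpy as np

def split_descriptions(coeffs, n_dup=1):
    """Split a block of transform coefficients (given in zigzag order) into
    two descriptions: the first n_dup coefficients are duplicated, the rest
    are alternated between description 0 and description 1."""
    d0 = np.zeros_like(coeffs)
    d1 = np.zeros_like(coeffs)
    d0[:n_dup] = coeffs[:n_dup]          # duplicated low-frequency part
    d1[:n_dup] = coeffs[:n_dup]
    d0[n_dup::2] = coeffs[n_dup::2]      # alternation of the remaining coefficients
    d1[n_dup + 1::2] = coeffs[n_dup + 1::2]
    return d0, d1

block = np.arange(16, dtype=float)       # stand-in for zigzag-scanned DCT coefficients
d0, d1 = split_descriptions(block)
rec = d0.copy()
rec[2::2] = d1[2::2]                     # merge when both descriptions arrive
assert np.allclose(rec, block)
```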
{"title":"Low-delay distributed multiple description coding for error-resilient video transmission","authors":"Wenhui Liu, K. R. Vijayanagar, Joohee Kim","doi":"10.1109/MMSP.2011.6093823","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093823","url":null,"abstract":"In this paper, a low-delay distributed multiple description coding (LD-DMDC) method that combines the principles of multiple description coding (MDC) and distributed video coding (DVC) has been proposed to further improve the error resilience of DVC. The proposed method generates two descriptions based on duplication and alternation of the discrete cosine transform (DCT) coefficients for the Wyner-Ziv (WZ) frames and by exploiting H.264/AVC's dispersed flexible macroblock ordering (FMO) for the key frames. The proposed method makes efficient use of skip blocks to exploit temporal redundancies between successive frames and employs a binary arithmetic coding instead of iterative channel coding to reduce the system latency. Simulation results show that the proposed method is robust against transmission errors, while maintaining low encoder complexity and low system latency.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133150623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive in-loop noise-filtered prediction for High Efficiency Video Coding
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093773
Eugen Wige, Gilbert Yammine, P. Amon, A. Hutter, André Kaup
Compression of noisy image sequences is a hard problem in video coding. For high-quality compression in particular, denoising the videos as a preprocessing step is not an option, since it lowers their objective quality. To overcome this problem, this paper presents an in-loop denoising framework for efficient medium- to high-fidelity compression of noisy video data. We show that low-complexity in-loop noise estimation and noise filtering, together with adaptive selection of the denoised inter-frame predictor, can improve compression performance. The proposed algorithm for adaptive selection of the denoised predictor is built on the current HEVC reference model: its inter-frame prediction modes are exploited to select denoised prediction adaptively by transmitting a small amount of side information in combination with decoder-side estimation. Simulation results show considerable gains for the proposed in-loop denoising framework with adaptive selection. They also include the theoretical bound on compression efficiency that would be reached if the adaptive selection of the denoised prediction could be estimated perfectly at the decoder.
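The core decision, stripped of the HEVC integration, is a per-block choice between the plain motion-compensated predictor and a denoised version of it. The sketch below is a hedged illustration, not the paper's implementation: a simple mean filter stands in for the low-complexity noise filter, and residual energy decides which predictor to use (one flag of side information per block).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def select_predictor(orig_block, ref_block):
    """Choose between the plain inter predictor and a denoised version
    (3x3 mean filter as a stand-in), based on sum of squared differences."""
    denoised = uniform_filter(ref_block, size=3)
    sse_plain = np.sum((orig_block - ref_block) ** 2)
    sse_denoised = np.sum((orig_block - denoised) ** 2)
    use_denoised = sse_denoised < sse_plain          # 1-bit side information per block
    return (denoised if use_denoised else ref_block), use_denoised

rng = np.random.default_rng(0)
clean = rng.normal(size=(8, 8))
noisy_ref = clean + 0.3 * rng.normal(size=(8, 8))    # reference carries acquisition noise
pred, flag = select_predictor(clean, noisy_ref)
print("denoised predictor chosen:", flag)
```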
{"title":"Adaptive in-loop noise-filtered prediction for High Efficiency Video Coding","authors":"Eugen Wige, Gilbert Yammine, P. Amon, A. Hutter, André Kaup","doi":"10.1109/MMSP.2011.6093773","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093773","url":null,"abstract":"Compression of noisy image sequences is a hard challenge in video coding. Especially for high quality compression the preprocessing of videos is not possible, as it decreases the objective quality of the videos. In order to overcome this problem, this paper presents an in-loop denoising framework for efficient medium to high fidelity compression of noisy video data. It is shown that using low complexity in-loop noise estimation and noise filtering as well as adaptive selection of the denoised inter frame predictors can improve the compression performance. The proposed algorithm for adaptive selection of the denoised predictor is based on the actual HEVC reference model. The different inter frame prediction modes within the current HEVC reference model are exploited for adaptive selection of denoised prediction by transmission of some side information in combination with decoder side estimation for denoised prediction. The simulation results show considerable gains using the proposed in-loop denoising framework with adaptive selection. In addition the theoretical bounds for the compression efficiency, if we could perfectly estimate the adaptive selection of the denoised prediction in the decoder, are shown in the simulation results.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134161360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wyner-Ziv frame parallel decoding based on multicore processors
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093835
Alberto Corrales-García, José Luis Martínez, G. Fernández-Escribano, F. Quiles, W. Fernando
Wyner-Ziv video coding is a paradigm that offers low-complexity video encoding. However, it concentrates high complexity at the decoder side, which can cause difficulties for applications with delay requirements. At the same time, technological advances provide new hardware that supports parallel data processing. In this paper, a faster Wyner-Ziv video decoding scheme based on multicore processors is proposed, in which each frame is decoded collaboratively by several processing units. The scheme achieves a time reduction of up to 71% without a significant rate-distortion penalty.
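A rough sketch of the frame-level parallelism idea only, not the authors' decoder: the partitions of one WZ frame (e.g. bitplanes or spatial stripes) are handed to a pool of worker processes and the partial results are merged afterwards. decode_unit is a hypothetical stand-in for the per-partition channel decoding.

```python
from concurrent.futures import ProcessPoolExecutor

def decode_unit(args):
    """Hypothetical stand-in for decoding one partition of a Wyner-Ziv frame
    with its side information (the real decoder would run channel decoding)."""
    unit_id, parity_bits, side_info = args
    return unit_id, f"decoded({len(parity_bits)} parity bits)"

def decode_frame_parallel(units, n_workers=4):
    # Several processing units collaborate on the same frame, one partition each.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = dict(pool.map(decode_unit, units))
    # Merge the partitions back into a full frame in their original order.
    return [results[i] for i in sorted(results)]

if __name__ == "__main__":
    units = [(i, b"\x00" * 128, None) for i in range(8)]
    print(decode_frame_parallel(units))
```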
{"title":"Wyner-Ziv frame parallel decoding based on multicore processors","authors":"Alberto Corrales-García, José Luis Martínez, G. Fernández-Escribano, F. Quiles, W. Fernando","doi":"10.1109/MMSP.2011.6093835","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093835","url":null,"abstract":"Wyner-Ziv video coding presents a new paradigm which offers low-complexity video encoding. However, the Wyner-Ziv paradigm accumulates high complexity at the decoder side and this could involve difficulties for applications which have delay requisites. On the other hand, technological advances provide us with new hardware which supports parallel data processing. In this paper, a faster Wyner-Ziv video decoding scheme based on multicore processors is proposed. In this way, each frame is decoded by means of the collaboration between several processing units, achieving a time reduction up to 71% without significant rate-distortion drop penalty.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124835858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Local learning-based image super-resolution
Xiaoqiang Lu, Haoliang Yuan, Yuan Yuan, Pingkun Yan, Luoqing Li, Xuelong Li
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093843
Local learning algorithms have been widely used in single-frame super-resolution (SR) reconstruction, for example the neighbor embedding algorithm [1] and the locality preserving constraints algorithm [2]. Neighbor embedding relies on the manifold assumption, namely that the embedded neighbor patches lie on a single manifold; however, this assumption does not always hold. In this paper, we present a novel local learning-based single-frame image SR reconstruction algorithm using kernel ridge regression (KRR). First, Gabor filters are used to extract texture information from low-resolution patches as features. Second, for each input low-resolution feature patch, the K-nearest-neighbor algorithm is used to build a local structure. Finally, KRR is employed to learn a mapping from low-resolution (LR) feature patches to high-resolution (HR) feature patches within the corresponding local structure. Experimental results show the effectiveness of our method.
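A compact sketch of the local-learning pipeline described above (K nearest neighbors to form a local structure, then kernel ridge regression from LR to HR feature patches), using scikit-learn as an illustrative stand-in; the Gabor feature extraction and patch handling are replaced here by assumed precomputed feature vectors, and the dimensions and hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.kernel_ridge import KernelRidge

def local_krr_sr(lr_train, hr_train, lr_query, k=12, alpha=1e-3, gamma=0.1):
    """For each LR query feature patch, fit a KRR model on its K nearest
    training neighbors (the 'local structure') and predict the HR patch."""
    knn = NearestNeighbors(n_neighbors=k).fit(lr_train)
    _, idx = knn.kneighbors(lr_query)
    preds = []
    for q, neighbors in zip(lr_query, idx):
        model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma)
        model.fit(lr_train[neighbors], hr_train[neighbors])   # local map LR -> HR
        preds.append(model.predict(q[None, :])[0])
    return np.array(preds)

rng = np.random.default_rng(1)
lr_train = rng.normal(size=(500, 16))     # e.g. Gabor features of 4x4 LR patches
hr_train = rng.normal(size=(500, 64))     # corresponding 8x8 HR patches
lr_query = rng.normal(size=(3, 16))
print(local_krr_sr(lr_train, hr_train, lr_query).shape)   # (3, 64)
```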
{"title":"Local learning-based image super-resolution","authors":"Xiaoqiang Lu, Haoliang Yuan, Yuan Yuan, Pingkun Yan, Luoqing Li, Xuelong Li","doi":"10.1109/MMSP.2011.6093843","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093843","url":null,"abstract":"Local learning algorithm has been widely used in single-frame super-resolution reconstruction algorithm, such as neighbor embedding algorithm [1] and locality preserving constraints algorithm [2]. Neighbor embedding algorithm is based on manifold assumption, which defines that the embedded neighbor patches are contained in a single manifold. While manifold assumption does not always hold. In this paper, we present a novel local learning-based image single-frame SR reconstruction algorithm with kernel ridge regression (KRR). Firstly, Gabor filter is adopted to extract texture information from low-resolution patches as the feature. Secondly, each input low-resolution feature patch utilizes K nearest neighbor algorithm to generate a local structure. Finally, KRR is employed to learn a map from input low-resolution (LR) feature patches to high-resolution (HR) feature patches in the corresponding local structure. Experimental results show the effectiveness of our method.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116963308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CGS quality scalability for HEVC
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093816
Zhongbo Shi, Xiaoyan Sun, Jizheng Xu
Scalable video coding provides an efficient way to serve video content at different quality levels. Building on the emerging High Efficiency Video Coding (HEVC) standard, we propose two coarse granular scalable (CGS) video coding schemes. Scheme A is a multi-loop solution in which the fully reconstructed base pictures are used for enhancement-layer prediction: by inserting the reconstructed base picture (BP) into the reference picture list of the collocated enhancement-layer frame, we enable coarse granular quality scalability in HEVC with very limited changes. Scheme B, in contrast, supports single-loop decoding. It uses three inter-layer prediction tools similar to those in the scalable extension of H.264/AVC. Compared to scheme A, it reduces decoding complexity by avoiding motion compensation, deblocking filtering (DF), and adaptive loop filtering (ALF) in the base layer. The effectiveness of the two proposed coding schemes is evaluated by comparison with single-layer coding and simulcast.
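For scheme A, the structural change described is that the reconstructed base picture joins the enhancement-layer reference list. The following is only a toy illustration of that effect (not HM reference software): motion search treats the collocated BP as one more candidate reference and picks whichever reference predicts a block best by SAD.

```python
import numpy as np

def best_reference(block, temporal_refs, base_picture_block):
    """Pick the best predictor among temporal references and the collocated
    reconstructed base picture (inter-layer reference), using SAD as cost."""
    candidates = list(temporal_refs) + [base_picture_block]   # BP appended to the reference list
    costs = [np.abs(block - c).sum() for c in candidates]
    best = int(np.argmin(costs))
    is_inter_layer = best == len(candidates) - 1
    return candidates[best], is_inter_layer

rng = np.random.default_rng(2)
cur = rng.integers(0, 255, (8, 8)).astype(float)
refs = [cur + rng.normal(0, 20, (8, 8)) for _ in range(2)]
bp = cur + rng.normal(0, 5, (8, 8))       # coarsely quantized base-layer reconstruction
_, from_base = best_reference(cur, refs, bp)
print("predicted from base picture:", from_base)
```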
{"title":"CGS quality scalability for HEVC","authors":"Zhongbo Shi, Xiaoyan Sun, Jizheng Xu","doi":"10.1109/MMSP.2011.6093816","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093816","url":null,"abstract":"Scalable video coding provides an efficient way to serve video contents at different quality levels. Based on the development of emerging High Efficiency Video Coding (HEVC), we propose two coarse granular scalable (CGS) video coding schemes here. In scheme A, we present a multi-loop solution in which the fully reconstructed base pictures are utilized in the enhancement layer prediction. By inserting the reconstructed base picture (BP) into the list of reference pictures of the collocated enhancement layer frame, we enable the coarse granular quality scalability of HEVC with very limited changes. On the other hand, scheme B supports single loop decoding. It contains three inter-layer predictions similar to the scalable extension of H.264/AVC. Compared to scheme A, it decreases the decoding complexity by avoiding the motion compensation, deblocking filtering (DF) and adaptive loop filtering (ALF) in the base layer. The effectiveness of our proposed two coding schemes is evaluated by comparing with single-layer coding and simulcast.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122604625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Block-level adaptive optimization for inter-layer texture up-sampling in H.264/SVC
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093811
Kan Chang, Tuanfa Qin, Wenhao Zhang, Aidong Men
The H.264 Scalable Video Coding (SVC) extension offers spatial scalability, which provides sequences at different resolutions from a single encoded bit-stream. To reduce redundancy between layers in spatially scalable intra-coded frames, the co-located reconstructed 8×8 sub-macroblock in the base layer (BL) is up-sampled to predict the macroblock (MB) in the enhancement layer (EL). Unfortunately, the simple 1-D poly-phase up-sampling filter used in the current SVC is not capable of achieving ideal results, which limits the performance of inter-layer intra prediction (ILIP). This paper proposes an adaptive optimization method for inter-layer texture up-sampling that applies a Wiener filter and controls it at the block level. Working as an additional stage of ILIP, the proposed method greatly reduces the prediction error between the original EL signal and the up-sampled BL signal. Experimental results show that the proposed method achieves a bit-rate reduction of up to 14.25% and a PSNR gain of up to 0.97 dB compared with the conventional method in the current SVC.
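The core operation is a block-level Wiener (least-squares) filter that refines the up-sampled BL block toward the original EL block; the encoder can then decide per block whether the refinement pays off. A simplified sketch of deriving such a filter, with a 3×3 support chosen arbitrarily here and none of the paper's signaling details:

```python
import numpy as np

def derive_wiener_filter(upsampled, original, support=3):
    """Least-squares (Wiener) FIR filter mapping a neighborhood of the
    up-sampled BL block to the original EL block."""
    pad = support // 2
    padded = np.pad(upsampled, pad, mode="edge")
    rows = []
    for i in range(upsampled.shape[0]):
        for j in range(upsampled.shape[1]):
            rows.append(padded[i:i + support, j:j + support].ravel())
    A, b = np.array(rows), original.ravel()
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)    # filter coefficients to transmit
    filtered = (A @ coeffs).reshape(original.shape)
    return coeffs, filtered

rng = np.random.default_rng(3)
el = rng.normal(size=(8, 8))
bl_up = el + 0.2 * rng.normal(size=(8, 8))            # stand-in for the up-sampled base layer
coeffs, refined = derive_wiener_filter(bl_up, el)
print("MSE before:", np.mean((bl_up - el) ** 2), "after:", np.mean((refined - el) ** 2))
```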
{"title":"Block-level adaptive optimization for inter-layer texture up-sampling in H.264/SVC","authors":"Kan Chang, Tuanfa Qin, Wenhao Zhang, Aidong Men","doi":"10.1109/MMSP.2011.6093811","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093811","url":null,"abstract":"H.264 Scalable Video Coding (SVC) extension has spatial scalability which is able to provide various resolution sequences for a single encoded bit-stream. In order to reduce redundancies between different layers, for spatial scalable intra-coded frames, co-located reconstructed 8×8 sub-macroblock in base layer (BL) is up-sampled to predict the marcoblock (MB) in enhancement layer (EL). Unfortunately, simple 1-D poly-phase up-sampling filter used in current SVC isn't cable of achieving ideal result, which limits the performance of inter-layer intra prediction (ILIP). This paper proposes an adaptive optimization method for inter-layer texture up-sampling by applying wiener filter and controlling it at block level. Working as an additional part of ILIP, the proposed method can greatly reduce the prediction error between the original EL signals and the up-sampled BL signals. Experimental results show that, the proposed method achieves bit rate reduction up to 14.25% and PSNR increment up to 0.97 dB when compared with the traditional method in current SVC.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"20 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123100318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative study of image correlation models for directional two-dimensional sources
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093812
Shuyuan Zhu, B. Zeng
The non-separable Karhunen-Loève transform (KLT) has been proven optimal for coding a directional 2-D source whose dominant directional information is neither horizontal nor vertical. However, the KLT depends on the image data, which makes it difficult to apply in practical image/video coding. To address this, one needs an image correlation model that adapts to the directional information and thereby facilitates the design of 2-D non-separable transforms. In this paper, we compare two models commonly used in practice: the absolute-distance model and the Euclidean-distance model. Theoretical analysis and an experimental study based on the two models show that the Euclidean-distance model consistently outperforms the absolute-distance model.
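Assuming the common reading of these models, the absolute-distance model takes R = rho**(|dx| + |dy|) (separable) while the Euclidean-distance model takes R = rho**sqrt(dx**2 + dy**2) (non-separable); the sketch below builds both correlation matrices for a small block and derives the corresponding KLT by eigendecomposition, with the directional rotation of coordinates omitted for brevity.

```python
import numpy as np

def correlation_matrix(n, rho, model="euclidean"):
    """Correlation matrix of an n x n block under R = rho ** d, where d is the
    absolute (|dx|+|dy|) or Euclidean (sqrt(dx^2+dy^2)) inter-pixel distance."""
    ys, xs = np.mgrid[0:n, 0:n]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    dx = coords[:, None, 0] - coords[None, :, 0]
    dy = coords[:, None, 1] - coords[None, :, 1]
    d = np.abs(dx) + np.abs(dy) if model == "absolute" else np.hypot(dx, dy)
    return rho ** d

def klt_basis(R):
    """KLT basis: eigenvectors of the correlation matrix, strongest first."""
    w, v = np.linalg.eigh(R)
    order = np.argsort(w)[::-1]
    return w[order], v[:, order]

for model in ("absolute", "euclidean"):
    w, _ = klt_basis(correlation_matrix(4, 0.95, model))
    print(model, "energy in first 4 of 16 basis vectors:", round(w[:4].sum() / w.sum(), 3))
```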
{"title":"A comparative study of image correlation models for directional two-dimensional sources","authors":"Shuyuan Zhu, B. Zeng","doi":"10.1109/MMSP.2011.6093812","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093812","url":null,"abstract":"The non-separable Karhunen-Loève transform (KLT) has been proven to be optimal for coding a directional 2-D source in which the dominant directional information is neither horizontal nor vertical. However, the KLT depends on the image data, and it is difficult to apply it in a practical image/video coding application. In order to solve this problem, it is necessary to build an image correlation model, and this model needs to adapt to the directional information so as to facilitate the design of 2-D non-separable transforms. In this paper, we compare two models that have been used commonly in practice: the absolute-distance model and the Euclidean-distance model. To this end, theoretical analysis and experimental study are carried out based on these two models, and the results show that the Euclidean-distance model consistently performs better than the absolute-distance model.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115088251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Epitome-based image compression using translational sub-pel mapping
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093786
S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Pérez
This paper addresses the problem of epitome construction for image compression. An optimized epitome construction method is first described, in which the epitome construction and the associated image reconstruction are both performed successively at full-pel and sub-pel accuracy. The resulting complete still-image compression scheme is then discussed, with details on several innovative tools. The PSNR-rate performance achieved with this epitome-based compression method is significantly higher than that obtained with H.264 Intra and with a state-of-the-art epitome construction method. A bit-rate saving of up to 16% compared to H.264 Intra is achieved.
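At its core, epitome-based compression maps each image block to a location in a much smaller epitome via a translation, and the decoder reconstructs blocks by copying from the epitome. A toy, full-pel-only sketch of that mapping step (far from the optimized, sub-pel construction described above; block size and exhaustive search are assumptions):

```python
import numpy as np

def map_blocks_to_epitome(image, epitome, bs=8):
    """For every bs x bs image block, find the best-matching (full-pel) block
    inside the epitome and store its translation vector."""
    H, W = image.shape
    eh, ew = epitome.shape
    mapping = {}
    for y in range(0, H - bs + 1, bs):
        for x in range(0, W - bs + 1, bs):
            block = image[y:y + bs, x:x + bs]
            best, best_cost = None, np.inf
            for ey in range(eh - bs + 1):
                for ex in range(ew - bs + 1):
                    cost = np.abs(block - epitome[ey:ey + bs, ex:ex + bs]).sum()
                    if cost < best_cost:
                        best, best_cost = (ey, ex), cost
            mapping[(y, x)] = best        # translation to transmit per block
    return mapping

rng = np.random.default_rng(4)
epitome = rng.integers(0, 255, (16, 16)).astype(float)
img = np.tile(epitome, (2, 2))            # toy image perfectly covered by the epitome
print(len(map_blocks_to_epitome(img, epitome)))   # 16 block-to-epitome translations
```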
{"title":"Epitome-based image compression using translational sub-pel mapping","authors":"S. Chérigui, C. Guillemot, D. Thoreau, P. Guillotel, P. Pérez","doi":"10.1109/MMSP.2011.6093786","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093786","url":null,"abstract":"This paper addresses the problem of epitome construction for image compression. An optimized epitome construction method is first described, where the epitome and the associated image reconstruction, are both successively performed at full pel and sub-pel accuracy. The resulting complete still image compression scheme is then discussed with details on some innovative tools. The PSNR-rate performance achieved with this epitome-based compression method is significantly higher than the one obtained with H.264 Intra and with state of the art epitome construction method. A bit-rate saving up to 16% comparatively to H.264 Intra is achieved.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115782767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-complexity priority based packet scheduling for streaming MPEG-4 SLS
Pub Date: 2011-12-01 | DOI: 10.1109/MMSP.2011.6093826
R. Yu, Dajun Wu, Jianping Chen, S. Rahardja
In this paper, we propose a low-complexity priority-based packet scheduling algorithm for streaming MPEG-4 Scalable to Lossless (SLS) encoded audio. In the proposed system, the SLS-encoded frames are partitioned into data units of different quality layers, which are transmitted according to their quality contribution to the final decoded audio and their urgency relative to the playback progress. Experimental results show that the proposed scheduling algorithm has even lower complexity than the traditional greedy packet scheduling algorithm, while outperforming it by a significant margin in terms of the quality of the streamed audio.
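The scheduling rule described (transmit data units according to quality contribution and playback urgency) can be sketched as a priority queue whose key combines each unit's quality gain with how close its playback deadline is; the weighting and the unit fields below are arbitrary illustrative choices, not the paper's priority metric.

```python
import heapq

def schedule(units, now, urgency_weight=2.0):
    """Order SLS data units for transmission: higher quality contribution and
    a closer playback deadline both raise priority (lower heap key)."""
    heap = []
    for frame_id, layer, quality_gain, deadline in units:
        slack = max(deadline - now, 1e-3)                 # time left before playback
        priority = -(quality_gain + urgency_weight / slack)
        heapq.heappush(heap, (priority, frame_id, layer))
    return [heapq.heappop(heap)[1:] for _ in range(len(heap))]

units = [
    (10, 0, 5.0, 0.40),   # core layer of frame 10: big quality gain, near deadline
    (10, 3, 0.5, 0.40),   # fine enhancement layer of the same frame
    (11, 0, 5.0, 0.80),   # core layer of the next frame, deadline further away
]
print(schedule(units, now=0.0))
```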
{"title":"Low-complexity priority based packet scheduling for streaming MPEG-4 SLS","authors":"R. Yu, Dajun Wu, Jianping Chen, S. Rahardja","doi":"10.1109/MMSP.2011.6093826","DOIUrl":"https://doi.org/10.1109/MMSP.2011.6093826","url":null,"abstract":"In this paper, we propose a low-complexity priority based packet scheduling algorithm for streaming MPEG-4 Scalable to Lossless (SLS) encoded audio. In the proposed system, the SLS encoded frames are partitioned into data units of different quality layers, which are transmitted according to their quality contribution to the final decoded audio and their urgency relative to the playback progress. Experimental results show that the proposed scheduling algorithm has an even lower compared to traditional greedy algorithm for packet scheduling, while outperforms them by a significant margin in for terms of quality of the streamed audio.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"54 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131470796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}