T. Phan, Yuichi Tanaka, Madoka Hasegawa, Shigeo Kato
{"title":"Mixed-resolution Wyner-Ziv video coding based on selective data pruning","authors":"T. Phan, Yuichi Tanaka, Madoka Hasegawa, Shigeo Kato","doi":"10.1109/MMSP.2011.6093784","DOIUrl":null,"url":null,"abstract":"In current distributed video coding (DVC), interpolation is performed at the decoder and the interpolated pixels are reconstructed by using error-correcting codes, such as Turbo codes and LDPC. There are two possibilities for downsampling video sequences at the encoder: temporally or spatially. Traditionally temporal downsampling, i.e., frame dropping, is used for DVC. Furthermore, those with spatial downsampling (scaling) have been investigated. Unfortunately, most of them are based on uniform downsampling. Due to this, details in video sequences are often discarded. For example, edges and textured regions are difficult to interpolate, and thus require many parity bits to restore the interpolated portions for the spatial domain DVC. In this paper, we propose a new spatial domain DVC based on adaptive line dropping so-called selective data pruning (SDP). SDP is a simple nonuniform downsampling method. The pruned lines are determined to avoid cutting across edges and textures. Experimental results show the proposed method outperforms a conventional DVC for sequences with a large amount of motions.","PeriodicalId":214459,"journal":{"name":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 13th International Workshop on Multimedia Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MMSP.2011.6093784","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
In current distributed video coding (DVC), interpolation is performed at the decoder and the interpolated pixels are reconstructed using error-correcting codes such as Turbo and LDPC codes. There are two ways to downsample video sequences at the encoder: temporally or spatially. Traditionally, temporal downsampling, i.e., frame dropping, has been used for DVC. Spatial downsampling (scaling) approaches have also been investigated. Unfortunately, most of them are based on uniform downsampling, so details in video sequences are often discarded. For example, edges and textured regions are difficult to interpolate and thus require many parity bits to restore the interpolated portions in spatial-domain DVC. In this paper, we propose a new spatial-domain DVC based on adaptive line dropping, called selective data pruning (SDP). SDP is a simple nonuniform downsampling method: the pruned lines are chosen so that they avoid cutting across edges and textures. Experimental results show that the proposed method outperforms a conventional DVC for sequences with a large amount of motion.
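As a rough illustration of the line-selection idea behind SDP, the sketch below ranks image lines (rows or columns) by a simple gradient-energy measure and prunes the smoothest ones, so that dropped lines tend not to cut across edges or textures. The energy criterion, function names, and pruning ratio are assumptions for illustration only, not the authors' exact algorithm.

```python
# Illustrative sketch of selective-data-pruning-style line selection.
# Assumption: "line energy" is approximated by the absolute gradients
# on either side of each row/column; low-energy (smooth) lines are pruned.
import numpy as np

def line_energies(frame: np.ndarray, axis: int) -> np.ndarray:
    """Approximate energy of each line from neighboring absolute gradients."""
    grad = np.abs(np.diff(frame.astype(np.float64), axis=axis))
    energy = np.zeros(frame.shape[axis])
    line_grad = grad.sum(axis=1 - axis)   # one value per gradient between lines
    energy[:-1] += line_grad              # gradient below/right of each line
    energy[1:] += line_grad               # gradient above/left of each line
    return energy

def select_lines_to_prune(frame: np.ndarray, num_prune: int, axis: int = 0) -> np.ndarray:
    """Return indices of the num_prune lowest-energy (smoothest) lines."""
    energy = line_energies(frame, axis)
    return np.sort(np.argsort(energy)[:num_prune])

def prune(frame: np.ndarray, lines: np.ndarray, axis: int = 0) -> np.ndarray:
    """Remove the selected lines, producing a nonuniformly downsampled frame."""
    return np.delete(frame, lines, axis=axis)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (288, 352), dtype=np.uint8)  # CIF-sized luma plane
    rows = select_lines_to_prune(frame, num_prune=72, axis=0)      # drop 25% of the rows
    pruned = prune(frame, rows, axis=0)
    print(pruned.shape)  # (216, 352)
```

In a full SDP codec, the indices of the pruned lines would be signaled to the decoder, which interpolates the missing lines and corrects them with the received parity bits.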