Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456262
Daniela Lanz, Jürgen Seiler, Karina Jaskolka, André Kaup
For the lossless compression of dynamic 3-D+t volumes as produced by medical devices like Computed Tomography, various coding schemes can be applied. This paper shows that 3-D subband coding outperforms lossless HEVC coding and additionally provides a scalable representation, which is often required in telemedicine applications. However, the resulting lowpass subband, which is to serve as a downscaled representative of the whole original sequence, contains many ghosting artifacts. These can be alleviated by incorporating motion compensation into the subband coder, which yields a high-quality lowpass subband but also lowers the compression ratio. To cope with this, we introduce a new approach for improving the compression efficiency of compensated 3-D wavelet lifting by performing denoising in the update step. We are able to reduce the file size of the lowpass subband by up to 1.64%, while the lowpass subband remains usable as a downscaled representative of the whole original sequence.
Title: Compression of Dynamic Medical CT Data Using Motion Compensated Wavelet Lifting with Denoised Update | Venue: 2018 Picture Coding Symposium (PCS)
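The lifting structure this abstract builds on can be sketched in its simplest form: a temporal Haar lifting with an optional denoising operator in the update step. Note this is an illustration of the structure only — motion compensation is omitted, and the denoiser passed in is a stand-in, not the authors' method.

```python
import numpy as np

def haar_lifting_temporal(frames, denoise=None):
    """One level of temporal Haar wavelet lifting over a stack of frames.

    frames: array of shape (2T, H, W). Returns (lowpass, highpass), each
    of shape (T, H, W). `denoise` is an optional deterministic operator
    applied to the update signal before it is added to the even frames.
    """
    even = frames[0::2].astype(np.float64)
    odd = frames[1::2].astype(np.float64)
    # Predict step: estimate odd frames from even frames.
    highpass = odd - even
    # Update step: feed half the highpass back so the lowpass behaves
    # like a running average of each frame pair.
    update = 0.5 * highpass
    if denoise is not None:
        update = denoise(update)
    lowpass = even + update
    return lowpass, highpass

def inverse_haar_lifting(lowpass, highpass, denoise=None):
    # The decoder recomputes the identical update from the transmitted
    # highpass, then undoes the two lifting steps in reverse order.
    update = 0.5 * highpass
    if denoise is not None:
        update = denoise(update)
    even = lowpass - update
    odd = highpass + even
    frames = np.empty((2 * lowpass.shape[0],) + lowpass.shape[1:], np.float64)
    frames[0::2] = even
    frames[1::2] = odd
    return frames
```

Because the decoder recomputes the same update from the transmitted highpass, any deterministic denoiser leaves the scheme perfectly invertible — which is the structural property that allows denoising the update step without giving up lossless reconstruction.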
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456301
T. Biatek, J. Travers, Pierre-Loup Cabarat, W. Hamidouche
Recently, coding of 360° video content has been investigated in the context of over-the-top streaming services. For delivery over terrestrial broadcast, however, such content must remain backward compatible with legacy receivers. In this paper, a novel layered coding scheme is proposed to address the delivery of 360° video content over terrestrial broadcast networks. One or several views are extracted from the 360° video and coded as base layers using standard HEVC encoding. Inter-layer reference pictures are built from the projected base layers and are used in the enhancement layer to encode the 360° video. Experimental results show that the proposed approach provides substantial coding gains of 14.99% compared to simulcast coding, with a limited coding overhead of 5.15% compared to 360° single-layer coding.
Title: Backward Compatible Layered Video Coding for 360° Video Broadcast | Venue: 2018 Picture Coding Symposium (PCS)
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456250
M. Tok, Rolf Jongebloed, Lieven Lange, Erik Bochinski, T. Sikora
Previous research has shown the interesting properties and potential of Steered Mixtures-of-Experts (SMoE) for image representation, approximation, and compression based on EM optimization. In this paper we introduce an MSE optimization method based on Gradient Descent for training SMoEs. This allows improved optimization towards PSNR and SSIM and decoupling of experts and gates. As a consequence, we can now generate very high quality SMoE models with significantly reduced model complexity compared to previous work and much improved edge representations. Based on this strategy, a block-based image coder was developed using Mixtures-of-Experts with very simple experts and very few model parameters. Experimental evaluations show that a significant compression gain can be achieved compared to JPEG at low bit rates.
Title: An MSE Approach For Training And Coding Steered Mixtures Of Experts | Venue: 2018 Picture Coding Symposium (PCS)
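Gradient-descent MSE training of a mixture-of-experts can be illustrated on a toy 1-D signal with a sharp edge, the kind of feature SMoE gates are meant to segment. All model details below (two linear experts, Gaussian-kernel softmax gates, finite-difference gradients) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.where(x < 0.5, 0.2 * x, 1.0 - 0.8 * x)  # piecewise-linear "edge" signal

K = 2  # number of experts

def smoe(params, x):
    # Unpack per-expert slope a, offset b, gate centre mu, gate log-precision s.
    a, b, mu, s = params.reshape(4, K)
    # Gating network: softmax over Gaussian kernels centred at mu.
    logits = -np.exp(s)[None, :] * (x[:, None] - mu[None, :]) ** 2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    g = np.exp(logits)
    g /= g.sum(axis=1, keepdims=True)
    experts = a[None, :] * x[:, None] + b[None, :]  # linear experts
    return (g * experts).sum(axis=1)

def mse(params):
    r = smoe(params, x) - y
    return float((r * r).mean())

init_params = rng.normal(0.0, 0.3, size=4 * K)
params = init_params.copy()
lr, eps = 0.1, 1e-5
for _ in range(2000):
    # Central-difference gradient: fine for a toy model with 8 parameters;
    # a real implementation would use analytic or autodiff gradients.
    grad = np.array([(mse(params + eps * e) - mse(params - eps * e)) / (2 * eps)
                     for e in np.eye(params.size)])
    params -= lr * grad
```

Training experts and gates jointly against MSE, rather than via EM likelihood, is the decoupling idea the abstract refers to: the loss directly targets reconstruction error, which is what PSNR measures.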
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456248
Bastian Wandt, Thorsten Laude, B. Rosenhahn, J. Ostermann
In recent years, there has been a tremendous improvement in video coding algorithms. This improvement resulted in 2013 in the standardization of the first version of High Efficiency Video Coding (HEVC), which now forms the state-of-the-art with superior coding efficiency. Nevertheless, the development of video coding algorithms did not stop, as HEVC still has its limitations. Complex textures in particular expose one of them: as these textures are hard to predict, very high bit rates are required to achieve high quality. Texture synthesis was proposed as a solution for this limitation in previous works. However, previous texture synthesis frameworks only prevailed if the decomposition into synthesizable and non-synthesizable regions was either known or very easy. In this paper, we address this scenario with a texture synthesis framework based on detail-aware image decomposition techniques. Our techniques follow a multi-step coarse-to-fine approach in which an initial decomposition is refined with awareness for small details. The efficiency of our approach is evaluated objectively and subjectively: BD-rate gains of up to 28.81% over HEVC and up to 12.75% over the closest related work were achieved. Our subjective tests indicate an improved visual quality in addition to the bit rate savings.
Title: Extending HEVC with a Texture Synthesis Framework using Detail-aware Image Decomposition | Venue: 2018 Picture Coding Symposium (PCS)
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456263
Johannes Sauer, M. Wien, J. Schneider, Max Bläser
In 360° video, a complete scene is captured as it can be seen from a single point in any direction. Since the captured 360° images are spherical, they cannot be converted to planar images without introducing geometric distortions. The nature of these distortions depends on the projection format used. This paper introduces an approach to reduce artifacts that occur when encoding 360° video that has been projected onto the faces of a cube. To achieve this, the operation of the deblocking filter is modified such that the correct pixels with respect to the 3-D geometry are used when filtering edges. The method is evaluated on the set of sequences defined by the Joint Call for Proposals on Video Compression with Capability beyond HEVC. While the method has almost no impact on objective coding performance, the visual quality is clearly enhanced: edges of the cube, previously visible as coding artifacts, are mostly removed by the proposed method.
Title: Geometry-Corrected Deblocking Filter for 360° Video Coding using Cube Representation | Venue: 2018 Picture Coding Symposium (PCS)
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456303
T. Goodall, A. Bovik
A variety of powerful picture quality predictors are available that rely on neuro-statistical models of distortion perception. We extend these principles to video source inspection by coupling spatial divisive normalization with a filterbank tuned for artifact detection, implemented in an augmented sparse functional form. We call this method Video Impairment Detection by SParse Error CapTure (VIDSPECT). We configure VIDSPECT to create state-of-the-art detectors of two kinds of commonly encountered source video artifacts: upscaling and combing. The system detects upscaling, identifies the upscaling type, and predicts the native video resolution. It also detects combing artifacts arising from interlacing. Our approach is simple, highly generalizable, and yields better accuracy than competing methods. A software release of VIDSPECT is available online for public use and evaluation: http://live.ece.utexas.edu/research/quality/VIDSPECT release.zip
Title: Detecting Source Video Artifacts with Supervised Sparse Filters | Venue: 2018 Picture Coding Symposium (PCS)
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456273
Katherine Storrs, S. V. Leuven, S. Kojder, Lucas Theis, Ferenc Huszár
To effectively evaluate subjective visual quality in weakly-controlled environments, we propose an Adaptive Paired Comparison method based on particle filtering. Because our approach requires each sample to be rated only once, test time is reduced compared to regular paired comparison. The method works with non-experts and improves reliability compared to MOS and DS-MOS methods.
Title: Adaptive Paired-Comparison Method for Subjective Video Quality Assessment on Mobile Devices | Venue: 2018 Picture Coding Symposium (PCS)
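The particle-filtering idea behind such a method can be sketched on the simplest case: estimating the latent quality difference between two encodes from noisy pairwise judgements. Everything below (the Bradley-Terry style observation model, the prior, all constants) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

true_delta = 1.0   # hidden "A is better than B" quality margin
n_trials = 400

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
# Simulated viewer choices: A wins each comparison with prob. sigmoid(delta).
wins = rng.random(n_trials) < sigmoid(true_delta)

particles = rng.normal(0.0, 1.5, size=4000)  # prior samples over delta
logw = np.zeros_like(particles)
for win in wins:
    p = sigmoid(particles)
    # Sequential Bayesian update: reweight every particle by the
    # likelihood of this single paired-comparison outcome.
    logw += np.log(p if win else 1.0 - p)
logw -= logw.max()  # stabilise before exponentiating
w = np.exp(logw)
w /= w.sum()
post_mean = float((w * particles).sum())
post_std = float(np.sqrt((w * (particles - post_mean) ** 2).sum()))
```

A practical adaptive version would also resample particles when the effective sample size drops and would pick the next pair to show based on the current posterior, which is what makes single-rating-per-sample testing feasible.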
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456251
Ashek Ahmmed, A. Naman, D. Taubman
Conventional video compression systems use a motion model to approximate the geometry of moving object boundaries. The motion model can be relieved of describing discontinuities in the underlying motion field by employing a motion hint, which exploits the spatial structure of reference frames to infer appropriate boundaries for future frames. However, estimating a highly accurate motion hint is computationally demanding, in particular for high-resolution video sequences. Leveraging the advantages of homogeneous motion discovery oriented prediction, in this paper we propose to tune the intra-domain motion uniformity for B-frames according to each frame's reference utility. Experimental results show improved bit rate savings compared to the approach where no such selective tuning is enforced.
Title: Enhanced Homogeneous Motion Discovery Oriented Prediction for Key Intermediate Frames | Venue: 2018 Picture Coding Symposium (PCS)
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456313
Keng-Shih Lu, Antonio Ortega, D. Mukherjee, Yue Chen
Rate-distortion (RD) optimization is an important tool in many video compression standards and can be used for transform selection. However, this is typically very computationally demanding because a full RD search involves the computation of transform coefficients for each candidate transform. In this paper, we propose an approach that uses sparse Laplacian operators to estimate the RD cost by computing a weighted squared sum of transform coefficients, without having to compute the actual transform coefficients. We demonstrate experimentally how our method can be applied for transform selection. Implemented in the AV1 encoder, our approach yields a significant speed-up in encoding time with a small increase in bitrate.
Title: Efficient Rate-distortion Approximation and Transform Type Selection using Laplacian Operators | Venue: 2018 Picture Coding Symposium (PCS)
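The core identity such an estimator rests on is a standard graph-signal-processing fact, shown here for the 1-D case: the DCT-II basis diagonalizes the Laplacian of a path graph, so the quadratic form x^T L x equals a weighted sum of squared DCT coefficients and can serve as a transform-domain proxy without running the transform. (This sketch illustrates the identity only, not the paper's AV1 integration.)

```python
import numpy as np

N = 8
# Laplacian L = D - A of the N-node path graph (tridiagonal).
L = np.zeros((N, N))
for i in range(N - 1):
    L[i, i] += 1.0
    L[i + 1, i + 1] += 1.0
    L[i, i + 1] -= 1.0
    L[i + 1, i] -= 1.0

# Orthonormal DCT-II matrix: its rows are the eigenvectors of L.
k = np.arange(N)[:, None]
n = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * k * (2 * n + 1) / (2 * N))
C[0, :] /= np.sqrt(2.0)

# Corresponding eigenvalues of the path-graph Laplacian.
lam = 2.0 - 2.0 * np.cos(np.pi * np.arange(N) / N)

x = np.random.default_rng(1).normal(size=N)
c = C @ x                                # explicit transform coefficients
weighted = float((lam * c ** 2).sum())   # weighted squared-coefficient sum
quad = float(x @ L @ x)                  # same value, no transform required
```

Because L is sparse (tridiagonal here), evaluating x @ L @ x costs O(N) multiplies versus O(N^2), or O(N log N) with a fast algorithm, for the full transform — which is where the encoder speed-up comes from.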
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456266
Liqiang Wang, Benben Niu, Yun He
Transform, a crucial module of the hybrid video coding framework, has relied on the Discrete Cosine Transform (DCT) for several decades. Recently, Singular Value Decomposition (SVD) and the Enhanced Multiple Transform (EMT) have been proposed to improve transform efficiency, though from different perspectives: SVD exploits the similarity between the prediction block and the inter residual block, while EMT adopts new sinusoidal transform cores to accommodate the larger prediction errors near the boundary of the prediction unit. The proposed method makes two key contributions. First, SVD and EMT are combined effectively. Second, non-square SVD is newly introduced to the original algorithm. Extensive experiments show average BD-rate savings of 1.07%, 1.06% and 0.65% for Y, U and V, respectively, compared to JEM5.0.1 with some coding tools disabled, with maximum savings of 5.87%, 4.28% and 4.47%.
Title: Effective Inter Transform Method Based on QTBT Structure for Future Video Coding | Venue: 2018 Picture Coding Symposium (PCS)
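The SVD idea mentioned above can be sketched minimally (details assumed, not the paper's exact algorithm): because an inter residual tends to share the directional structure of its prediction block, a separable transform derived from the SVD of the prediction can compact the residual's energy — and the decoder can derive the identical transform, since it also has the prediction.

```python
import numpy as np

rng = np.random.default_rng(2)
pred = rng.normal(size=(8, 8))                  # motion-compensated prediction block
orig = pred + 0.05 * rng.normal(size=(8, 8))    # current block: prediction + small error
resid = orig - pred                             # inter residual to be coded

# Orthonormal bases adapted to the prediction; no side information is
# needed, because the decoder repeats this SVD on its own copy of pred.
U, _, Vt = np.linalg.svd(pred)
coeffs = U.T @ resid @ Vt.T   # separable forward transform of the residual
recon = U @ coeffs @ Vt       # inverse transform
```

Since U and V are orthogonal, the transform is perfectly invertible regardless of how well the bases match the residual; the match only affects how compact the coefficients are.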