Benchmarking of Objective Quality Metrics for Colorless Point Clouds
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456252
E. Alexiou, T. Ebrahimi
Recent advances in depth sensing and display technologies, along with the significant growth of interest in augmented and virtual reality applications, lay the foundation for the rapid evolution of applications that provide immersive experiences. In such applications, advanced content representations are required in order to increase the engagement of the user with the displayed imagery. Point clouds have emerged as a promising solution to this aim, due to their efficiency in capturing, storing, delivering and rendering 3D immersive content. As in any type of imaging, the evaluation of point clouds in terms of visual quality is essential. In this paper, benchmarking results of state-of-the-art objective metrics for geometry-only point clouds are reported and analyzed under two different types of geometry degradation, namely Gaussian noise and octree-based compression. Human ratings obtained from two subjective experiments are used as the ground truth. Our results show that most objective quality metrics perform well in the presence of noise, whereas one particular method has high predictive power and outperforms the others after octree-based encoding.
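As a concrete example of the kind of geometry-only metric benchmarked here, the sketch below computes a symmetric point-to-point (D1) distortion and PSNR between a reference and a degraded point cloud. This is one commonly used metric, not necessarily the exact implementation evaluated in the paper, and the choice of the reference bounding-box diagonal as the PSNR peak is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def p2point_psnr(reference, degraded, peak=None):
    """Symmetric point-to-point (D1) geometry distortion between two point clouds.

    reference, degraded: (N, 3) and (M, 3) float arrays of XYZ coordinates.
    peak: signal peak for PSNR; defaults to the reference bounding-box diagonal.
    """
    if peak is None:
        peak = np.linalg.norm(reference.max(axis=0) - reference.min(axis=0))

    # Mean squared distance from every point of a to its nearest neighbour in b.
    def mse(a, b):
        d, _ = cKDTree(b).query(a, k=1)
        return np.mean(d ** 2)

    symmetric_mse = max(mse(degraded, reference), mse(reference, degraded))
    return 10.0 * np.log10(peak ** 2 / symmetric_mse)
```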
Analysis and Prediction of JND-Based Video Quality Model
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456243
Haiqiang Wang, Xinfeng Zhang, Chao Yang, C.-C. Jay Kuo
The just-noticeable-difference (JND) visual perception property has received much attention in characterizing human subjective viewing experience of compressed video. In this work, we quantify the JND-based video quality assessment model using the satisfied user ratio (SUR) curve, and show that the SUR model can be greatly simplified since the JND points of multiple subjects for the same content in the VideoSet can be well modeled by the normal distribution. Then, we design an SUR prediction method with video quality degradation features and masking features and use them to predict the first, second and third JND points and their corresponding SUR curves. Finally, we verify the performance of the proposed SUR prediction method with different configurations on the VideoSet. The experimental results demonstrate that the proposed SUR prediction method achieves good performance across various resolutions, with the mean absolute error (MAE) of the SUR smaller than 0.05 on average.
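Since the paper reports that per-subject JND points are well modeled by a normal distribution, the SUR curve follows directly from the fitted parameters. The sketch below assumes JND points expressed as QP values and uses illustrative numbers; it is a minimal illustration of that relationship, not the paper's prediction method.

```python
import numpy as np
from scipy.stats import norm

def fit_sur_curve(jnd_points):
    """Fit a normal distribution to per-subject JND points and return the
    satisfied-user-ratio curve SUR(q) = P(JND > q)."""
    mu, sigma = norm.fit(jnd_points)
    return lambda q: 1.0 - norm.cdf(q, loc=mu, scale=sigma)

# Illustrative JND points (in QP) collected from ten subjects for one sequence.
jnd = np.array([27, 29, 30, 31, 31, 32, 33, 34, 35, 36])
sur = fit_sur_curve(jnd)
print(sur(30))  # fraction of viewers who do not yet notice a difference at QP 30
```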
Rotational Motion Compensated Prediction in HEVC Based Omnidirectional Video Coding
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456296
B. Vishwanath, K. Rose, Yuwen He, Yan Ye
Spherical video is becoming prevalent in virtual and augmented reality applications. With the increased field of view, spherical video needs enormous amounts of data, obviously demanding efficient compression. Existing approaches simply project the spherical content onto a plane to facilitate the use of standard video coders. Earlier work at UCSB was motivated by the realization that existing approaches are suboptimal due to warping introduced by the projection, yielding complex non-linear motion that is not captured by the simple translational motion model employed in standard coders. Moreover, motion vectors in the projected domain do not offer a physically meaningful model. The proposed remedy was to capture the motion directly on the sphere with a rotational motion model, in terms of sphere rotations along geodesics. The rotational motion model preserves the shape and size of objects on the sphere. This paper implements and tests the main ideas from the previous work [1] in the context of a full-fledged, unconstrained coder including, in particular, bi-prediction, multiple reference frames and motion vector refinement. Experimental results provide evidence for considerable gains over HEVC.
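The core of the rotational motion model is that motion is represented as a rotation of samples on the sphere rather than a translation in the projected plane. The sketch below, a minimal illustration using Rodrigues' rotation formula, shows how such a rotation displaces unit vectors along geodesics while preserving shape and size; the actual codec integration (reference sample fetching, interpolation, signaling) is omitted.

```python
import numpy as np

def rotate_on_sphere(points, axis, angle):
    """Rotate unit vectors on the sphere by `angle` radians about `axis`
    using Rodrigues' rotation formula; a shape- and size-preserving
    geodesic displacement, unlike translational motion in the projected plane."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return points @ R.T
```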
Multiple Feature-based Classifications Adaptive Loop Filter
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456264
Johannes Erfurt, Wang-Q Lim, H. Schwarz, D. Marpe, T. Wiegand
In video coding, the adaptive loop filter (ALF) has attracted attention due to the coding gains it provides. Recently, ALF has been extended to the geometry transformation-based adaptive loop filter (GALF), which outperforms existing ALF techniques. The main idea of ALF is to apply a classification to obtain multiple classes, which gives a partition of the set of all pixel locations. After that, a Wiener filter is applied for each class. Therefore, the performance of ALF essentially relies on how well its classification behaves. In this paper, we introduce a novel classification method, Multiple feature-based Classifications ALF (MCALF), which extends the classification in GALF, and show that it increases coding efficiency while only marginally raising encoding complexity. The key idea is to apply more than one classifier at the encoder to group all reconstructed samples and then to select the classifier with the best RD-performance to carry out the classification process. Simulation results show that around 2% bit rate reduction can be achieved on top of GALF for some selected test sequences.
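The following sketch illustrates the encoder-side selection idea: run several candidate classifiers, fit a per-class filter for each, and keep the classifier with the lowest rate-distortion cost. For brevity a per-class scalar gain stands in for the real 2D Wiener filter, and `bits_per_filter` is a hypothetical rate estimate; this illustrates the selection principle only, not the GALF/MCALF implementation.

```python
import numpy as np

def train_mcalf(reconstructed, original, classifiers, lam, bits_per_filter=200):
    """Pick the classifier (a function mapping a float frame to an integer
    class index per pixel) whose per-class filters give the lowest
    RD cost D + lambda * R. Returns (cost, classifier, filtered_frame)."""
    best = None
    for classify in classifiers:
        labels = classify(reconstructed)
        filtered = reconstructed.copy()
        n_classes = int(labels.max()) + 1
        for c in range(n_classes):
            mask = labels == c
            denom = (reconstructed[mask] ** 2).sum()
            if mask.any() and denom > 0:
                # Least-squares gain per class; GALF fits a 2D Wiener filter instead.
                gain = (original[mask] * reconstructed[mask]).sum() / denom
                filtered[mask] = gain * reconstructed[mask]
        distortion = ((original - filtered) ** 2).sum()
        cost = distortion + lam * bits_per_filter * n_classes
        if best is None or cost < best[0]:
            best = (cost, classify, filtered)
    return best
```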
Temporal Adaptive Quantization using Accurate Estimations of Inter and Skip Probabilities
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456275
Maxime Bichon, J. L. Tanou, M. Ropert, W. Hamidouche, L. Morin, Lu Zhang
Hybrid video coding systems use spatial and temporal predictions in order to remove redundancies within the video source signal. These predictions create coding-scheme-related dependencies, often neglected for the sake of simplicity. The R-D Spatio-Temporal Adaptive Quantization (RDSTQ) solution uses such dependencies to achieve better coding efficiency. It models the temporal distortion propagation by estimating the probability of a Coding Unit (CU) to be Inter coded. Based on this probability, each CU is given a weight depending on its relative importance compared to other CUs. However, the initial approach roughly estimates the Inter probability and does not take into account the Skip mode characteristics in the propagation. This induces a significant Target Bitrate Deviation (TBD) compared to the reference target rate. This paper improves the original RDSTQ model by using a more accurate estimation of the Inter probability. Then, a new analytical solution for local quantizers is obtained by introducing the Skip probability of a CU into the temporal distortion propagation model. The proposed solution brings −2.05% BD-BR gain on average over RDSTQ at low rate, which corresponds to −13.54% BD-BR gain on average against no local quantization. Moreover, the TBD is reduced from 38% to 14%.
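As a rough illustration of how such probabilities could drive local quantization (the paper derives an actual analytical solution, which differs from this toy model), the sketch below turns estimated Inter and Skip probabilities into a per-CU QP offset: CUs whose distortion is more likely to propagate get a larger weight and therefore a lower QP. Both the weight formula and the 3·log2 mapping are assumptions made for illustration only.

```python
import numpy as np

def cu_delta_qp(p_inter, p_skip, propagation_gain=1.0):
    """Toy mapping from Inter/Skip probabilities to a per-CU QP offset.
    A high Inter probability and low Skip probability mean the CU's distortion
    is likely to propagate temporally, so it receives finer quantization."""
    weight = 1.0 + propagation_gain * p_inter * (1.0 - p_skip)
    return int(round(-3.0 * np.log2(weight)))

print(cu_delta_qp(p_inter=0.9, p_skip=0.1))  # heavily referenced CU -> negative offset
```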
Single Layer Progressive Coding for High Dynamic Range Videos
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456314
H. Kadu, Qing Song, Guan-Ming Su
There are different kinds of high dynamic range (HDR) displays in the market today. These displays have different HDR specifications, such as peak/dark brightness levels, electro-optical transfer functions (EOTF), and color spaces. For the best visual experience on a given HDR screen, colorists have to grade videos for that specific display’s luminance range. But simultaneous transmission of multiple video bitstreams graded at different luminance ranges is inefficient in terms of network utility and server storage. To overcome this problem, we propose transmitting our progressive metadata with a base layer video bitstream. This embedding allows different overlapping portions of metadata to scale the base video to progressively wider luminance ranges. Our progressive metadata format provides a significant design improvement over the existing architectures, preserves colorist intent at all the supported brightness ranges and still keeps the bandwidth or storage overhead minimal.
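A minimal sketch of the selection logic described above, under the assumption of a hypothetical metadata layout (the paper's actual progressive metadata format is not reproduced here): each portion covers a luminance range, and a device picks the overlapping portions needed to scale the base grade up to its display's peak luminance.

```python
def select_metadata(progressive_metadata, display_peak_nits):
    """Pick the metadata portions (sorted by ascending luminance range) that a
    display with the given peak luminance needs; portions are applied
    cumulatively on top of the base-layer grade."""
    return [p for p in progressive_metadata if p["range_low_nits"] < display_peak_nits]

# Hypothetical portions for a base grade at 100 nits.
metadata = [
    {"range_low_nits": 100, "range_high_nits": 600, "params": "..."},
    {"range_low_nits": 600, "range_high_nits": 1000, "params": "..."},
    {"range_low_nits": 1000, "range_high_nits": 4000, "params": "..."},
]
print(len(select_metadata(metadata, display_peak_nits=800)))  # -> 2 portions
```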
Region-Wise Super-Resolution Algorithm Based On the Viewpoint Distribution
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456295
Kazunori Uruma, Shunsuke Takasu, Keiko Masuda, S. Hangai
Recently, super-resolution techniques have been actively studied for the purpose of reusing low resolution image content. Although many approaches to appropriate super-resolution have been proposed, such as non-linear filtering, total variation regularization, and deep learning, the characteristics of the viewpoint distribution of the observer have not been effectively utilized. Applying super-resolution to unimportant regions of an image may hinder the observer’s attention while viewing the display, leading to a low subjective evaluation. This paper proposes a region-wise super-resolution algorithm based on the viewpoint distribution of the observer. However, the viewpoint distribution map for an image cannot be obtained without a preliminary experiment using a device such as an eye mark recorder; therefore, a saliency map is utilized in this paper. Numerical examples show that the proposed algorithm using the saliency map achieves a higher subjective evaluation than a previous study based on non-linear-filtering super-resolution. Furthermore, the proposed algorithm using the saliency map is shown to give results similar to those of the algorithm using the viewpoint distribution map obtained from the pre-experiment with an eye mark recorder.
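The sketch below illustrates the region-wise idea: upscale the whole frame cheaply, run a costlier enhancement path only where saliency is high, and blend. OpenCV's spectral-residual saliency stands in for the paper's saliency model, and bicubic upscaling plus unsharp masking stands in for the actual super-resolution method; the threshold is an arbitrary assumption. Requires opencv-contrib-python.

```python
import cv2
import numpy as np

def region_wise_sr(low_res, scale=2, saliency_threshold=0.5):
    """Apply the 'expensive' enhancement only in salient regions and a cheap
    bilinear upscale elsewhere, then blend with the upscaled saliency mask."""
    ok, saliency = cv2.saliency.StaticSaliencySpectralResidual_create().computeSaliency(low_res)
    mask = (saliency > saliency_threshold).astype(np.float32)
    mask = cv2.resize(mask, None, fx=scale, fy=scale, interpolation=cv2.INTER_NEAREST)

    cheap = cv2.resize(low_res, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
    sr = cv2.resize(low_res, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    sr = cv2.addWeighted(sr, 1.5, cv2.GaussianBlur(sr, (0, 0), 2), -0.5, 0)  # unsharp mask

    if cheap.ndim == 3:
        mask = mask[..., None]
    return (mask * sr + (1.0 - mask) * cheap).astype(low_res.dtype)
```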
Wavefront Parallel Processing for AV1 Encoder
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456283
Yikai Zhao, Jiangtao Wen
The emerging AV1 coding standard brings even higher computational complexity than current coding standards, but does not support the traditional Wavefront Parallel Processing (WPP) approach due to the lack of syntax support. In this paper, we introduce a novel framework to implement WPP for an AV1 encoder that is compatible with current decoders without additional bitstream syntax support, where mode selection is processed in wavefront-parallel fashion before entropy encoding and the entropy contexts for rate-distortion optimization are predicted. Based on this framework, context prediction algorithms that use the same data dependency model as previous work in H.264 and HEVC are implemented. Furthermore, we propose an optimal context prediction algorithm specifically for AV1. Experimental results show that our framework with the proposed optimal algorithm yields good parallelism and scalability (over 10x speed-up with 16 threads for 4K sequences) with little coding performance loss (less than 0.2% bitrate increase).
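The dependency model is the same one used by WPP in HEVC: with a lag of two superblocks between consecutive rows, a superblock depends only on blocks in earlier wavefronts, so all blocks within one wavefront can be mode-decided concurrently. The sketch below shows only that scheduling; `process_superblock` is a placeholder for the actual mode-decision work, and the thread pool merely illustrates the idea rather than the encoder's real worker threads.

```python
from concurrent.futures import ThreadPoolExecutor

def wavefront_schedule(rows, cols, lag=2):
    """Group superblock positions (r, c) into wavefronts keyed by c + lag * r.
    Every dependency (left neighbour, above and above-right rows) lies in an
    earlier wavefront, so blocks sharing a key are mutually independent."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            waves.setdefault(c + lag * r, []).append((r, c))
    return [waves[k] for k in sorted(waves)]

def encode_wavefront_parallel(rows, cols, process_superblock, workers=16):
    """process_superblock(r, c) is a placeholder for mode decision of one superblock."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for wave in wavefront_schedule(rows, cols):
            list(pool.map(lambda rc: process_superblock(*rc), wave))
```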
Compression Performance Comparison of x264, x265, libvpx and aomenc for On-Demand Adaptive Streaming Applications
Pub Date: 2018-06-01 | DOI: 10.1109/PCS.2018.8456302
Liwei Guo, J. D. Cock, A. Aaron
The video compression standard H.264/AVC was released in 2003 and has dominated the industry for the past decade. Over the last few years, a number of next-generation standards/formats, such as VP9 (2012), H.265/HEVC (2013) and AV1 (2018), were introduced, all claiming significant improvement over H.264/AVC. In this paper, we present our evaluation of the performance of these compression standards. Our evaluation is conducted using open-source encoder implementations of these standards: x264 (for H.264/AVC), x265 (for H.265/HEVC), libvpx (for VP9) and aomenc (for AV1). The process is designed to evaluate the attainable compression efficiency for on-demand adaptive streaming applications. Results with two different quality metrics, PSNR and VMAF, are reported. Our results reveal that x265, libvpx and aomenc all achieve substantial compression efficiency improvement over x264.
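Comparisons like this are typically summarized with Bjontegaard-delta bitrate over matched rate-quality points; the sketch below is the standard cubic-fit BD-rate computation (an assumption, since the abstract does not state the exact aggregation used) and works with either PSNR or VMAF as the quality axis.

```python
import numpy as np

def bd_rate(rate_anchor, qual_anchor, rate_test, qual_test):
    """Bjontegaard-delta bitrate (%) between two RD curves (>= 4 points each).
    rate_*: bitrates in kbps, qual_*: matching quality scores (PSNR or VMAF).
    Negative result: the test codec needs less bitrate for the same quality."""
    fit_anchor = np.polyfit(qual_anchor, np.log(rate_anchor), 3)
    fit_test = np.polyfit(qual_test, np.log(rate_test), 3)
    lo = max(min(qual_anchor), min(qual_test))
    hi = min(max(qual_anchor), max(qual_test))
    int_anchor = np.polyint(fit_anchor)
    int_test = np.polyint(fit_test)
    avg_anchor = (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    return (np.exp(avg_test - avg_anchor) - 1.0) * 100.0
```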