
2018 Picture Coding Symposium (PCS) — Latest Publications

Quantifying the Influence of Devices on Quality of Experience for Video Streaming
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456304
Jing Li, Lukáš Krasula, P. Callet, Zhi Li, Yoann Baveye
Internet streaming is changing the way people watch videos. Traditional quality assessment for cable/satellite broadcasting systems focused mainly on perceptual quality. Nowadays, this concept has been extended to Quality of Experience (QoE), which also considers contextual factors such as the viewing environment and the display device. In this study, we focus on the influence of devices on QoE. A subjective experiment was conducted using our proposed AccAnn methodology, in which observers evaluated the QoE of video sequences in terms of Acceptance and Annoyance. Two devices were used in this study: a TV and a tablet. The experimental results showed that the device is a significant factor influencing QoE, and that this influence varies with the QoE of the video sequences. To quantify this influence, the Eliminated-By-Aspects model was used. The results could be used to train a device-neutral objective QoE metric. For video streaming providers, quantifying the influence of devices could help optimize the selection of streaming content: on the one hand, it could satisfy observers' QoE expectations on the devices they use; on the other hand, it could help save bitrate.
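The paper fits an Eliminated-By-Aspects model to the Acceptance/Annoyance votes. As a much simpler illustration of how paired subjective votes map to a relative "worth" between two devices, the two-alternative case of a Bradley-Terry-style choice model has a closed form. The counts below are made-up illustrative numbers, not the paper's data, and this sketch is not the paper's model:

```python
# Toy sketch: relative QoE "worth" of two viewing conditions (e.g. TV vs.
# Tablet) from paired preference counts, Bradley-Terry style. Illustrative
# only — the paper uses the Eliminated-By-Aspects model, not this.

def bradley_terry_two(wins_a, wins_b):
    """Closed-form Bradley-Terry worth of item A, with item B's worth fixed at 1."""
    p_a = wins_a / (wins_a + wins_b)  # empirical probability A is preferred
    return p_a / (1.0 - p_a)          # odds ratio = worth_a / worth_b

# Hypothetical: TV preferred in 70 of 100 paired trials at a given quality level
worth_tv = bradley_terry_two(70, 30)
print(round(worth_tv, 3))
```

A worth above 1 means the first device is preferred at that quality level; repeating the fit per quality level shows how the device influence varies with QoE, which is the effect the paper quantifies.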
Citations: 7
Next Generation Video Coding for Spherical Content
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456281
Adeel Abbas, David Newman, Srilakshmi Akula, Akhil Konda
Recently, the Joint Video Exploration Team (JVET) issued a Call for Proposals (CfP) for video compression technology expected to be the successor to HEVC. In this paper, we present some of the technology from our joint response in the 360° video category of the CfP. The goal was to keep the design as simple as possible, with picture-level preprocessing and without 360-specific coding tools. The response is based on a relatively new projection called Rotated Sphere Projection (RSP). RSP splits and surrounds the sphere using two faces cropped from the Equirectangular Projection (ERP), much as two flat pieces of rubber are stitched together to form a tennis ball. This approach allows RSP to conform to the sphere more closely than a cube map, achieving more continuity while preserving a 3:2 aspect ratio. Our results show an average BD-rate luma coding gain of 10.5% compared to ERP using HEVC.
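Since the RSP faces are cropped from ERP, the basic building block of any such projection is the mapping between ERP pixel coordinates and directions on the sphere. A minimal sketch of the standard ERP convention (our own illustration, not the authors' code or the exact CfP conversion software):

```python
import math

def erp_to_sphere(u, v, width, height):
    """Map an ERP pixel position (u, v) to (longitude, latitude) in radians.

    u runs left-to-right over [0, width), v top-to-bottom over [0, height).
    Longitude spans [-pi, pi), latitude spans [-pi/2, pi/2] (standard ERP).
    """
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    return lon, lat

# Center column, one quarter down an 8K ERP frame -> lon 0, lat pi/4
lon, lat = erp_to_sphere(1920, 540, 3840, 2160)
print(round(lon, 3), round(lat, 3))
```

Cropping two ERP regions and re-orienting them around the sphere (the "tennis ball" seam) is then a rotation applied to these directions before sampling.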
Citations: 0
A Study on the Required Video Bit-rate for 8K 120-Hz HEVC/H.265 Temporal Scalable Coding
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456288
Yasuko Sugito, Shinya Iwasaki, Kazuhiro Chida, Kazuhisa Iguchi, Kikufumi Kanda, Xuying Lei, H. Miyoshi, Kimihiko Kazui
This paper studies the video bit-rate required for 8K 119.88 Hz (120 Hz) High Efficiency Video Coding (HEVC)/H.265 temporal scalable coding, which allows 59.94 Hz (60 Hz) video frames to be partially decoded from compressed 120 Hz bit-streams. We compress 8K 120 Hz test sequences using software that emulates the HEVC/H.265 encoder we are developing, and conduct two types of subjective evaluation experiments to investigate the appropriate bit-rates for both 8K 120 Hz and 60 Hz videos for broadcasting purposes. From the results of the experiments, we conclude that the required video bit-rate for 8K 120 Hz temporal scalable coding is estimated to be between 85 and 110 Mbps, which is equivalent to the practical bit-rate for 8K 60 Hz videos, and that the appropriate bit-rate allocation for the 8K 60 Hz video in 8K 120 Hz temporal scalable coding at 85 Mbps is presumed to be ∼80 Mbps.
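The abstract's figures imply a strongly asymmetric split: at 85 Mbps total with ~80 Mbps on the 60 Hz base layer, only ~5 Mbps remains for the frames carried by the temporal enhancement layer. The per-frame arithmetic below is our own illustration of that allocation, using the abstract's numbers:

```python
# Sketch: average bits per frame for base vs. temporal enhancement layer,
# using the abstract's 85 Mbps total / ~80 Mbps base-layer figures.

def layer_bits_per_frame(total_mbps, base_mbps, base_fps=60.0, total_fps=120.0):
    """Return (base, enhancement) average kilobits per frame.

    The enhancement layer carries only the frames beyond the base rate
    (total_fps - base_fps of them per second).
    """
    enh_mbps = total_mbps - base_mbps
    enh_fps = total_fps - base_fps
    base_kbits = base_mbps * 1000.0 / base_fps
    enh_kbits = enh_mbps * 1000.0 / enh_fps
    return base_kbits, enh_kbits

base_kb, enh_kb = layer_bits_per_frame(85.0, 80.0)
print(round(base_kb, 1), round(enh_kb, 1))  # 1333.3 83.3
```

This makes the design point concrete: enhancement-layer frames get roughly 1/16 the bits of base-layer frames, which is plausible because they are predicted from temporally adjacent base frames.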
Citations: 13
PCS 2018 Cover Page
Pub Date : 2018-06-01 DOI: 10.1109/pcs.2018.8456292
Citations: 0
Compound Split Tree for Video Coding
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456309
Weijia Zhu, A. Segall
During the exploration of video coding technology for potential next-generation standards, the Joint Video Exploration Team (JVET) has been studying quad-tree plus binary-tree (QTBT) partition structures within its Joint Exploration Model (JEM). The QTBT partition structure provides more flexibility than the quad-tree-only partition structure in HEVC. Here, we further extend the QTBT structure to allow quad-tree partitioning to be performed both before and after a binary-tree partition. We refer to this structure as a compound split tree (CST). To show the efficacy of the approach, we implemented the method in JEM7. Under the random-access configuration, the method achieved average BD-bitrate savings of 1.25%, 2.11%, and 1.87% for the Y, U, and V components, respectively.
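The flexibility gain from letting quad splits follow binary splits can be illustrated by counting reachable partitionings. The toy model below assumes square quad splits, binary splits in either direction, and a minimum block size; these rules and the 16-pixel floor are our own simplifying assumptions, not the JEM7/CST signalling:

```python
from functools import lru_cache

# Toy model of a compound split tree: unlike QTBT, a quad split may also
# follow a binary split, so every split type is tried at every node.
# Counts distinct leaf partitionings of a w x h block (illustrative only).

@lru_cache(maxsize=None)
def count_partitions(w, h, min_size=16):
    total = 1  # the block kept whole as a leaf
    if w == h and w >= 2 * min_size:                       # quad split
        total += count_partitions(w // 2, h // 2, min_size) ** 4
    if h >= 2 * min_size:                                  # horizontal binary split
        total += count_partitions(w, h // 2, min_size) ** 2
    if w >= 2 * min_size:                                  # vertical binary split
        total += count_partitions(w // 2, h, min_size) ** 2
    return total

print(count_partitions(32, 32))  # 10 partitionings of a 32x32 block
```

Even this toy version shows the search space exploding with block size (a 64×64 block already admits tens of thousands of partitionings), which is why encoder-side rate-distortion pruning matters for such structures.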
Citations: 4
SRQM: A Video Quality Metric for Spatial Resolution Adaptation
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456246
Alex Mackin, Mariana Afonso, Fan Zhang, D. Bull
This paper presents a full-reference objective video quality metric (SRQM) that characterises the relationship between variations in spatial resolution and visual quality in the context of adaptive video formats. SRQM uses wavelet decomposition, subband combination with perceptually inspired weights, and spatial pooling to estimate the relative quality between the frames of a high-resolution reference video and one that has been spatially adapted through a combination of downsampling and upsampling. The uVI-SR video database is used to benchmark SRQM against five commonly used quality metrics. The database contains 24 diverse video sequences spanning a range of spatial resolutions up to UHD-I (3840×2160). An in-depth analysis demonstrates that SRQM is statistically superior to the other quality metrics for all tested adaptation filters, while having relatively low computational complexity.
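The wavelet-plus-weighted-subband idea can be sketched with a one-level 2-D Haar analysis: resampling artifacts concentrate in the high-frequency subbands, which a metric can weight differently from the low-pass band. The filters and the weights below are hypothetical placeholders, not SRQM's actual decomposition or trained weights:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def subband_distortion(ref, dist, weights=(0.2, 0.3, 0.3, 0.2)):
    """Perceptually weighted subband MSE between reference and distorted frames.

    `weights` are illustrative, not SRQM's perceptually derived values.
    """
    bands_r, bands_d = haar2d(ref), haar2d(dist)
    return sum(w * np.mean((br - bd) ** 2)
               for w, br, bd in zip(weights, bands_r, bands_d))
```

In a full metric this per-frame score would then be spatially and temporally pooled, as the abstract describes for SRQM.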
Citations: 6
High Dynamic Range Image Compression Based on Visual Saliency
Pub Date : 2018-06-01 DOI: 10.1017/ATSIP.2020.15
Shenda Li, Jin Wang, Qing Zhu
A high dynamic range (HDR) image has a larger luminance range than a conventional low dynamic range (LDR) image, making it more consistent with the human visual system (HVS). Recently, the JPEG committee released a new HDR image compression standard, JPEG XT, which decomposes the input HDR image into a base layer and an extension layer. However, this method does not make full use of HVS properties, wasting bits on regions imperceptible to human eyes. In this paper, a visual-saliency-based HDR image compression scheme is proposed. The saliency map of the tone-mapped HDR image is first extracted and then used to guide extension-layer encoding, so that compression quality adapts to the saliency of each coding region. Extensive experimental results show that our method outperforms JPEG XT profiles A, B, and C while retaining JPEG compatibility. Moreover, our method supports progressive coding of the extension layer.
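One common way to let a saliency map guide encoding is to modulate the quantization strength per block: salient blocks get finer quantization, non-salient blocks coarser. The mapping below is a generic illustration of that idea, not the paper's actual JPEG XT extension-layer scheme, and the offset range is an assumption:

```python
# Sketch: saliency-driven per-block quantization offset. Assumes saliency
# is normalized to [0, 1] with 0.5 neutral; max_offset is illustrative.

def qp_for_block(base_qp, saliency, max_offset=6):
    """Lower QP (finer quantization) for salient blocks, raise it elsewhere."""
    return int(round(base_qp - (saliency - 0.5) * 2.0 * max_offset))

print(qp_for_block(32, 1.0))  # most salient  -> 26
print(qp_for_block(32, 0.0))  # least salient -> 38
print(qp_for_block(32, 0.5))  # neutral       -> 32
```

Because the offsets are symmetric around the neutral saliency, the average bit budget stays roughly constant while bits shift toward regions the HVS attends to.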
Citations: 5
A Simple Prediction Fusion Improves Data-driven Full-Reference Video Quality Assessment Models
Pub Date : 2018-06-01 DOI: 10.1109/PCS.2018.8456293
C. Bampis, A. Bovik, Zhi Li
When developing data-driven video quality assessment algorithms, the size of the available ground-truth subjective data may hamper the generalization capabilities of the trained models. Nevertheless, if the application context is known a priori, leveraging data-driven approaches for video quality prediction can deliver promising results. Towards achieving high-performing video quality prediction for compression and scaling artifacts, Netflix developed the Video Multi-method Assessment Fusion (VMAF) framework, a full-reference prediction system that uses a regression scheme to integrate multiple perception-motivated features to predict video quality. However, the current version of VMAF does not fully capture temporal video features relevant to temporal video distortions. To achieve this goal, we developed Ensemble VMAF (E-VMAF): a video quality predictor that combines two models: VMAF, and predictions based on entropic differencing features calculated on video frames and frame differences. We demonstrate the improved performance of E-VMAF on various subjective video databases. The proposed model will become available as part of the open source package at https://github.com/Netflix/vmaf.
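The title's "simple prediction fusion" amounts to combining per-model scores into a single prediction. The weighted-average sketch below is a generic illustration of score-level fusion, not E-VMAF's actual combination rule or trained weights:

```python
# Toy score-level fusion of two quality predictors (e.g. a VMAF-style score
# and a temporal-feature score). Weights here are hypothetical.

def fuse(scores, weights):
    """Weighted average of per-model quality predictions."""
    assert len(scores) == len(weights) and sum(weights) > 0
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Hypothetical per-model scores on a 0-100 quality scale
print(fuse([80.0, 70.0], [0.5, 0.5]))  # -> 75.0
```

In practice such fusion weights would be learned on subjective databases, with the gain coming from the second model contributing temporal-distortion information the first model lacks.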
Citations: 12
PCS 2018 TOC
Pub Date : 2018-06-01 DOI: 10.1109/pcs.2018.8456256
Citations: 0
PCS 2018 Author Index
Pub Date : 2018-06-01 DOI: 10.1109/pcs.2018.8456286
Citations: 0