On the accuracy of objective image and video quality models: New methodology for performance evaluation

Lukáš Krasula, K. Fliegel, P. Callet, M. Klima
{"title":"On the accuracy of objective image and video quality models: New methodology for performance evaluation","authors":"Lukáš Krasula, K. Fliegel, P. Callet, M. Klima","doi":"10.1109/QoMEX.2016.7498936","DOIUrl":null,"url":null,"abstract":"There are several standard methods for evaluating the performance of models for objective quality assessment with respect to results of subjective tests. However, all of them suffer from one or more of the following drawbacks: They do not consider the uncertainty in the subjective scores, requiring the models to make certain decision where the correct behavior is not known. They are vulnerable to the quality range of the stimuli in the experiments. In order to compare the models, they require a mapping of predicted values to the subjective scores, thus not comparing the models exactly as they are used in the real scenarios. In this paper, new methodology for objective models performance evaluation is proposed. The method is based on determining the classification abilities of the models considering two scenarios inspired by the real applications. It does not suffer from the previously stated drawbacks and enables to easily evaluate the performance on the data from multiple subjective experiments. Moreover, techniques to determine statistical significance of the performance differences are suggested. 
The proposed framework is tested on several selected metrics and datasets, showing the ability to provide a complementary information about the models' behavior while being in parallel with other state-of-the-art methods.","PeriodicalId":6645,"journal":{"name":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","volume":"357 1","pages":"1-6"},"PeriodicalIF":0.0000,"publicationDate":"2016-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"75","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/QoMEX.2016.7498936","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 75

Abstract

There are several standard methods for evaluating the performance of objective quality assessment models against the results of subjective tests. However, all of them suffer from one or more of the following drawbacks: they do not consider the uncertainty in the subjective scores, forcing the models to make a definite decision even where the correct behavior is not known; they are sensitive to the quality range of the stimuli in the experiments; and, in order to compare the models, they require a mapping of predicted values to the subjective scores, so the models are not compared exactly as they are used in real scenarios. In this paper, a new methodology for evaluating the performance of objective models is proposed. The method is based on determining the classification abilities of the models in two scenarios inspired by real applications. It does not suffer from the drawbacks stated above and makes it easy to evaluate performance on data from multiple subjective experiments. Moreover, techniques for determining the statistical significance of performance differences are suggested. The proposed framework is tested on several selected metrics and datasets, showing that it provides complementary information about the models' behavior while remaining consistent with other state-of-the-art methods.
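To make the classification-based idea concrete, the sketch below illustrates one of the two scenarios described in the abstract: deciding, for every pair of stimuli, whether the subjective data distinguish them (a simple two-sample z-test on the mean opinion scores and their standard errors is assumed here), and then measuring how well the absolute difference of the objective model's predictions separates "different" pairs from "similar" ones via the area under the ROC curve. This is a minimal reconstruction from the abstract, not the paper's exact procedure; the function names, the choice of test, and the AUC-based scoring are illustrative assumptions.

```python
# Sketch of a classification-based evaluation of an objective quality model
# (the "different vs. similar" scenario), reconstructed from the abstract.
# All names and the statistical test used are illustrative assumptions.
from itertools import combinations
from math import erf, sqrt


def significantly_different(mos_i, mos_j, se_i, se_j, alpha=0.05):
    """Two-sample z-test on mean opinion scores given their standard errors."""
    z = abs(mos_i - mos_j) / sqrt(se_i ** 2 + se_j ** 2)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))  # two-sided p-value
    return p < alpha


def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    the probability that a positive pair receives a higher score than a
    negative pair (ties counted as 0.5)."""
    pos = [s for lbl, s in zip(labels, scores) if lbl]
    neg = [s for lbl, s in zip(labels, scores) if not lbl]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def different_vs_similar_auc(mos, se, objective):
    """Label every stimulus pair as 'different' if the subjective test
    distinguishes it, score the pair by the absolute difference of the
    objective predictions, and return the resulting classification AUC.
    No mapping of predictions to subjective scores is needed."""
    labels, scores = [], []
    for i, j in combinations(range(len(mos)), 2):
        labels.append(significantly_different(mos[i], mos[j], se[i], se[j]))
        scores.append(abs(objective[i] - objective[j]))
    return auc(labels, scores)
```

Because only the ordering of the objective predictions matters for the AUC, the evaluation is unaffected by any monotonic regression between predicted and subjective scores, which is exactly the drawback of mapping-based comparisons that the abstract points out.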