
Latest publications: 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX)

Impact of test condition selection in adaptive crowdsourcing studies on subjective quality
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498939
Michael Seufert, Ondrej Zach, T. Hossfeld, M. Slanina, P. Tran-Gia
Adaptive crowdsourcing is a new approach to crowdsourced Quality of Experience (QoE) studies, which aims to improve the certainty of the resulting QoE models by adaptively distributing a fixed budget of user ratings over the test conditions. The main idea of the adaptation is to dynamically allocate the next rating to a condition for which the ratings submitted so far show low certainty. This paper investigates the effects of statistical adaptation on the distribution of ratings and the goodness of the resulting QoE models. Thereby, it gives methodological advice on how to select test conditions for future crowdsourced QoE studies.
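The allocation rule described above, giving the next rating to the condition whose ratings currently show the lowest certainty, can be illustrated with a short sketch. This is an interpretation for illustration only, assuming the certainty criterion is the width of the 95% confidence interval of the mean rating; the paper may use a different criterion, and the function names are hypothetical.

```python
import math
import random
import statistics

def ci_width(ratings, z=1.96):
    """Width of the 95% confidence interval of the mean rating.
    Conditions with fewer than two ratings are treated as maximally uncertain."""
    if len(ratings) < 2:
        return float("inf")
    return 2 * z * statistics.stdev(ratings) / math.sqrt(len(ratings))

def allocate_ratings(conditions, collect_rating, budget):
    """Adaptively spend a fixed budget of ratings across test conditions."""
    ratings = {c: [] for c in conditions}
    for _ in range(budget):
        # Give the next rating to the condition with the lowest certainty so far.
        target = max(conditions, key=lambda c: ci_width(ratings[c]))
        ratings[target].append(collect_rating(target))
    return ratings

if __name__ == "__main__":
    # Toy usage: simulate noisy 5-point ACR ratings for three conditions.
    true_mos = {"cond_a": 4.2, "cond_b": 3.1, "cond_c": 2.0}
    simulate = lambda c: min(5.0, max(1.0, random.gauss(true_mos[c], 0.8)))
    result = allocate_ratings(list(true_mos), simulate, budget=90)
    for cond, r in result.items():
        print(cond, len(r), round(statistics.mean(r), 2))
```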
Citations: 11
Training listeners for multi-channel audio quality evaluation in MUSHRA with a special focus on loop setting
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498952
Nadja Schinkel-Bielefeld
Audio quality evaluation for audio material of intermediate and high quality requires expert listeners. Compared to non-experts, they are not only more critical in their ratings, but also employ different strategies in their evaluation. In particular, they concentrate on shorter sections of the audio signal and compare more to the reference than inexperienced listeners. We created a listener training for detecting coding artifacts in multi-channel audio quality evaluation. Our training is targeted at listeners without a technical background. For this training, expert listeners commented on the smaller sections of an audio signal they focused on in the listening test and provided a description of the artifacts they perceived. The non-expert listeners participating in the training were provided with general advice on helpful strategies in MUSHRA tests (Multi Stimulus Tests with Hidden Reference and Anchor), with the experts' comments on specific sections of the stimulus, and with feedback after rating. Listeners' performance improved in the course of the training session. Afterwards they performed the same test without the training material and a further test with different items. Performance did not decrease in these tests, showing that they could transfer what they had learned to other stimuli. After the training they also set more loops and compared more to the reference.
Citations: 3
No-reference image quality assessment based on statistics of Local Ternary Pattern
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498959
P. Freitas, W. Y. L. Akamine, Mylène C. Q. Farias
In this paper, we propose a new no-reference image quality assessment (NR-IQA) method that uses a machine learning technique based on Local Ternary Pattern (LTP) descriptors. LTP descriptors are a generalization of Local Binary Pattern (LBP) texture descriptors that provide a significant performance improvement when compared to LBP. More specifically, LTP is less susceptible to noise in uniform regions, but no longer rigidly invariant to gray-level transformations. Due to this insensitivity to noise, LTP descriptors are not able to detect milder image degradations. To tackle this issue, we propose a strategy that uses multiple LTP channels to extract texture information. The prediction algorithm uses the histograms of these LTP channels as features for the training procedure. The proposed method is able to blindly predict image quality, i.e., the method is no-reference (NR). Results show that the proposed method is considerably faster than other state-of-the-art no-reference methods, while maintaining a competitive image quality prediction accuracy.
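As a rough sketch of the feature-extraction idea (LTP codes split into an upper and a lower binary channel, histogrammed, and used as features for a learned quality predictor), the following Python snippet may help. The threshold, the 3x3 neighborhood, and the SVR regressor are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVR

def ltp_channels(img, t=5):
    """Upper/lower Local Ternary Pattern codes for a grayscale image,
    using the 8-neighborhood of each pixel (image borders are skipped)."""
    img = img.astype(np.int32)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]
        upper |= (neigh >= center + t).astype(np.int32) << bit
        lower |= (neigh <= center - t).astype(np.int32) << bit
    return upper, lower

def ltp_features(img, t=5):
    """Concatenated 256-bin histograms of the upper and lower LTP channels."""
    upper, lower = ltp_channels(img, t)
    h_up, _ = np.histogram(upper, bins=256, range=(0, 256), density=True)
    h_lo, _ = np.histogram(lower, bins=256, range=(0, 256), density=True)
    return np.concatenate([h_up, h_lo])

if __name__ == "__main__":
    # Synthetic stand-ins for images and subjective scores from an IQA database.
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, (64, 64)) for _ in range(20)]
    scores = rng.uniform(1, 5, 20)
    X = np.stack([ltp_features(im) for im in images])
    model = SVR(kernel="rbf").fit(X, scores)
    print("predicted quality of first image:", round(float(model.predict(X[:1])[0]), 2))
```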
Citations: 35
Using individual data to characterize emotional user experience and its memorability: Focus on gender factor
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498969
Romain Cohendet, Anne-Laure Gilet, Matthieu Perreira Da Silva, P. Callet
Delivering the same digital image to several users does not necessarily provide them with the same experience. In this study, we focused on how different affective experiences impact the memorability of an image. Forty-nine participants took part in an experiment in which they saw a stream of images conveying various emotions. One day later, they had to recognize the images displayed the day before and rate them according to the positivity/negativity of the emotional experience the images induced. In order to better appreciate the underlying idiosyncratic factors that affect the experience under test, prior to the test session we collected not only personal information but also the results of psychological tests to characterize individuals according to their dominant personality in terms of masculinity-femininity (Bem Sex Role Inventory) and to measure their emotional state. The results show that the way an emotional experience is rated depends on personality rather than biological sex, suggesting that personality could be a mediator of the well-established differences in how males and females experience emotional material. From the collected data, we derive a model including individual factors relevant to characterizing the memorability of the images, in particular through the emotional experience they induced.
Citations: 4
Studying user agreement on aesthetic appeal ratings and its relation with technical knowledge
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498934
Pierre R. Lebreton, A. Raake, M. Barkowsky
In this paper, a crowdsourcing experiment was conducted involving different panels of participants. The aim of this study is to evaluate how the preference for one image over another is related to the participant's knowledge of photography. In previous work, the two discriminant evaluation concepts “presence of a main subject” and “exposure” were found to distinguish groups of participants with different degrees of knowledge in photography. Each of these groups produced different mean aesthetic appeal ratings when asked to rate on an absolute category scale. The present paper extends previous work by studying preference ratings on a set of image pairs as a function of technical knowledge, and more specifically adds a focus on the variance of ratings and the agreement between participants. The study was composed of two steps: the participants first had to report their preference for one image over another (paired comparison), followed by an evaluation of each participant's technical background using a specific set of images. Based on preference-rating patterns, groups of participants were identified. These groups were formed by clustering participants who shared the same preference ratings on images into one group, and participants with low agreement with other participants into another group. A per-group analysis showed that high agreement between participants could be observed when participants had technical knowledge. This indicates that higher consistency between participants can be reached when expert users are recruited, and therefore participants should be carefully selected in image aesthetic appeal evaluations to ensure stable results.
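The grouping step can be made concrete with a small sketch: compute a pairwise agreement score between participants (the fraction of image pairs on which two participants chose the same image) and cluster on the resulting distances. This agreement measure and the hierarchical clustering are illustrative assumptions; the paper does not prescribe this exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def agreement_matrix(prefs):
    """prefs: (participants x image_pairs) array of 0/1 choices.
    Returns the fraction of image pairs on which each two participants agree."""
    n = prefs.shape[0]
    agree = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            agree[i, j] = np.mean(prefs[i] == prefs[j])
    return agree

def cluster_participants(prefs, threshold=0.7):
    """Group participants whose mutual agreement exceeds the threshold."""
    agree = agreement_matrix(prefs)
    # Convert agreement to a condensed distance vector and cluster (average linkage).
    dist = 1.0 - agree[np.triu_indices_from(agree, k=1)]
    return fcluster(linkage(dist, method="average"),
                    1.0 - threshold, criterion="distance")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    consistent = np.repeat(rng.integers(0, 2, (1, 40)), 5, axis=0)  # 5 raters, identical choices
    inconsistent = rng.integers(0, 2, (5, 40))                      # 5 raters, random choices
    print(cluster_participants(np.vstack([consistent, inconsistent])))
```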
Citations: 3
Evaluating color difference measures in images
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498922
Benhur Ortiz Jaramillo, A. Kumcu, W. Philips
The best-known and most widely used method for comparing two homogeneous color samples is the CIEDE2000 color difference formula, because of its strong agreement with human perception. However, the formula is unreliable when applied over images, and its spatial extensions have shown little improvement compared with the original formula. Hence, researchers have proposed many methods intended to measure color differences (CDs) in natural scene color images. However, these existing methods have not yet been rigorously compared. Therefore, in this work we review and evaluate CD measures with the purpose of answering the question: to what extent do state-of-the-art CD measures agree with human perception of CDs in images? To answer this question, we have reviewed and evaluated eight state-of-the-art CD measures on a public image quality database. We found that CIEDE2000, its spatial extension, and the just noticeable CD measure perform well in computing CDs in images distorted by black level shift and color quantization algorithms (correlation higher than 0.8). However, none of the tested CD measures perform well at identifying CDs for the variety of color-related distortions tested in this work; e.g., most of the tested CD measures showed a correlation lower than 0.65.
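One straightforward way to apply CIEDE2000 to whole images, rather than to single color samples, is to compute the formula per pixel in CIELAB space and pool the result, for example by averaging, as sketched below with scikit-image. The averaging pooling is an assumption for illustration; the paper evaluates several published measures rather than prescribing one.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def mean_ciede2000(ref_rgb, dist_rgb):
    """Per-pixel CIEDE2000 between two RGB images (floats in [0, 1]),
    pooled by simple averaging."""
    delta = deltaE_ciede2000(rgb2lab(ref_rgb), rgb2lab(dist_rgb))  # per-pixel dE map
    return float(delta.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((64, 64, 3))
    dist = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
    print("mean CIEDE2000:", round(mean_ciede2000(ref, dist), 3))
```

Pooled scores of this kind can then be correlated with subjective ratings, which is the sort of benchmark the paper reports.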
Citations: 16
Visual attention as a dimension of QoE: Subtitles in UHD videos
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498924
Toinon Vigier, Yoann Baveye, J. Rousseau, P. Callet
With the ever-growing availability of multimedia content produced, broadcast and consumed worldwide, subtitling is becoming an essential service to quickly share understandable content. Simultaneously, the increased resolution of the ultra high definition (UHD) standard comes with wider screens and new viewing conditions. Services such as the display of subtitles thus require adaptation to better fit the newly induced viewing visual angle. This paper aims at evaluating the quality of experience of subtitled movies in UHD in order to propose guidelines for the appearance of subtitles. From an eye-tracking experiment conducted with 68 observers and 30 video sequences, viewing behavior and visual saliency are analyzed with and without subtitles and for different subtitle styles. Various metrics based on eye-tracking data, such as the Reading Index for Dynamic Texts (RIDT), are computed to objectively measure the ease of reading and the disturbance caused by subtitles. The results mainly show that doubling the visual angle of subtitles from HD to UHD guarantees subtitle readability without compromising the enjoyment of the video content.
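The role of the viewing visual angle can be made concrete with the standard relation θ = 2·atan(h / (2d)), where h is the physical subtitle height and d the viewing distance: for small angles, moving from the typical HD viewing distance of 3H to the UHD distance of 1.5H roughly doubles the angle subtended by a subtitle of the same physical height. The display size, subtitle height and viewing distances in the sketch below are illustrative assumptions, not the experimental setup of the paper.

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Visual angle (degrees) subtended by an object of physical size `size_m`
    seen from a distance `distance_m`."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

if __name__ == "__main__":
    screen_height = 0.62    # metres; hypothetical 50-inch-class display
    subtitle_height = 0.02  # metres; hypothetical height of one subtitle line
    for name, dist_factor in [("HD viewing, 3H", 3.0), ("UHD viewing, 1.5H", 1.5)]:
        angle = visual_angle_deg(subtitle_height, dist_factor * screen_height)
        print(f"{name}: subtitle visual angle = {angle:.2f} deg")
```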
Citations: 9
Perceptual image quality enhancement for solar radio image
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498933
Long Xu, Lin Ma, Zhuo Chen, Xianyou Zeng, Yihua Yan
In solar radio observation, the visualization of data is very important since it can deliver information of interest about solar radio activities to astronomers more intuitively and clearly. For visualization, good visual quality of images/videos is highly desirable, as it favors the discovery of solar radio events recorded in observation data. The existing imaging system cannot guarantee good visual quality of solar radio data visualization. In this paper, an image quality enhancement algorithm is developed to improve solar radio extreme ultraviolet (EUV) images from the Solar Dynamics Observatory (SDO). Firstly, a guided filter is employed to smooth the image, which outputs an image with a good skeleton and edges. Since the fine structures of solar radio activities are embedded in the high-frequency components of a solar radio image, we propose a novel structure-preserving filtering to amplify the difference signal obtained by subtracting the smoothed image from the original input. Afterwards, the amplified details and the smoothed image are fused together to generate the final enhanced image. The experimental results show that image quality is significantly improved by the proposed image quality enhancement algorithm.
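The enhancement pipeline described, edge-preserving smoothing, amplification of the detail layer (original minus smoothed), and fusion, can be sketched as follows. For self-containment, the sketch uses a minimal box-filter implementation of the guided filter (He et al.) with the image as its own guide; the radius, regularization and gain values are illustrative and not the parameters used in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Minimal grayscale guided filter (He et al.) built from box filters."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_g, mean_s = box(guide), box(src)
    cov_gs = box(guide * src) - mean_g * mean_s
    var_g = box(guide * guide) - mean_g * mean_g
    a = cov_gs / (var_g + eps)
    b = mean_s - a * mean_g
    return box(a) * guide + box(b)

def enhance(img, radius=8, eps=1e-3, gain=2.0):
    """Edge-preserving smoothing, detail amplification, and fusion."""
    img = img.astype(np.float64)
    base = guided_filter(img, img, radius, eps)  # smoothed image: skeleton and edges
    detail = img - base                          # high-frequency fine structures
    return np.clip(base + gain * detail, 0.0, 1.0)

if __name__ == "__main__":
    # Synthetic stand-in for a solar EUV image: smooth gradient plus faint ripples and noise.
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:256, 0:256] / 256.0
    img = np.clip(0.25 * (x + y) + 0.02 * np.sin(40 * x) +
                  0.01 * rng.standard_normal((256, 256)), 0.0, 1.0)
    out = enhance(img)
    print("input std:", round(float(img.std()), 4), "enhanced std:", round(float(out.std()), 4))
```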
Citations: 2
How to benchmark objective quality metrics from paired comparison data?
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498960
Philippe Hanhart, Lukáš Krasula, P. Callet, T. Ebrahimi
The procedures commonly used to evaluate the performance of objective quality metrics rely on ground truth mean opinion scores and associated confidence intervals, which are usually obtained via direct scaling methods. However, indirect scaling methods, such as the paired comparison method, can also be used to collect ground truth preference scores. Indirect scaling methods have a higher discriminatory power and are gaining popularity, for example in crowdsourcing evaluations. In this paper, we present how classification errors, an existing analysis tool, can also be used with subjective preference scores. Additionally, we propose a new analysis tool based on receiver operating characteristic (ROC) analysis. This tool can be used to further assess the performance of objective metrics based on ground truth preference scores. We provide a MATLAB script implementing the proposed tools and show one example of their application.
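As a rough illustration of the ROC idea, one can label each stimulus pair as significantly different or similar from the paired-comparison votes and then measure how well the difference in objective metric scores separates the two classes. This simplified reading, including the binomial significance test and the use of the absolute score difference, is an assumption for illustration; the exact procedure is defined by the MATLAB script accompanying the paper.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.metrics import roc_auc_score

def benchmark_metric(pref_counts, totals, metric_a, metric_b, alpha=0.05):
    """ROC-style benchmark of an objective metric against paired-comparison data.

    pref_counts[i] -- number of times stimulus A was preferred over B in pair i
    totals[i]      -- total number of comparisons collected for pair i
    metric_a/b[i]  -- objective metric scores of the two stimuli in pair i
    """
    # Label a pair as 'different quality' if its votes deviate significantly from 50/50.
    labels = np.array([binomtest(int(k), int(n), 0.5).pvalue < alpha
                       for k, n in zip(pref_counts, totals)], dtype=int)
    # Score each pair by the absolute difference of the objective metric values.
    scores = np.abs(np.asarray(metric_a) - np.asarray(metric_b))
    return roc_auc_score(labels, scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic example: 25 pairs with no quality difference, 25 with a clear one,
    # 24 observers per pair, and a noisy objective metric.
    true_diff = np.concatenate([np.zeros(25), rng.uniform(1.5, 3.0, 25)])
    votes = rng.binomial(24, 1.0 / (1.0 + np.exp(-true_diff)))
    metric_a = true_diff + 0.3 * rng.standard_normal(50)
    metric_b = np.zeros(50)
    print("AUC:", round(benchmark_metric(votes, np.full(50, 24), metric_a, metric_b), 3))
```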
Citations: 21
Video content analysis method for audiovisual quality assessment
Pub Date : 2016-06-06 DOI: 10.1109/QoMEX.2016.7498965
Baris Konuk, Emin Zerman, G. Nur, G. Akar
In this study, a novel video content analysis method based on spatio-temporal characteristics is presented. The proposed method has been evaluated on different video quality assessment databases, which include videos with different characteristics and distortion types. Test results obtained on different databases demonstrate the robustness and accuracy of the proposed content analysis method. Moreover, this analysis method is employed to examine the performance improvement in audiovisual quality assessment when video content is taken into consideration.
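The abstract does not detail which spatio-temporal characteristics are used. A common pair of descriptors in this area is the spatial information (SI) and temporal information (TI) of ITU-T P.910, sketched below as one plausible way to characterize video content; treating these as the paper's actual features is an assumption.

```python
import numpy as np
from scipy.ndimage import sobel

def si_ti(frames):
    """Spatial information (SI) and temporal information (TI) in the spirit of
    ITU-T P.910, for a sequence of grayscale frames (2-D arrays)."""
    si_values, ti_values = [], []
    prev = None
    for frame in frames:
        frame = frame.astype(np.float64)
        # SI: standard deviation of the Sobel gradient magnitude, per frame.
        grad = np.hypot(sobel(frame, axis=0), sobel(frame, axis=1))
        si_values.append(grad.std())
        # TI: standard deviation of the difference to the previous frame.
        if prev is not None:
            ti_values.append((frame - prev).std())
        prev = frame
    return max(si_values), (max(ti_values) if ti_values else 0.0)

if __name__ == "__main__":
    # Synthetic clip: a bright square moving over a noisy background.
    rng = np.random.default_rng(0)
    frames = []
    for t in range(10):
        f = 0.1 * rng.standard_normal((120, 160))
        f[40:60, 10 + 8 * t: 30 + 8 * t] += 1.0
        frames.append(f)
    si, ti = si_ti(frames)
    print(f"SI = {si:.3f}, TI = {ti:.3f}")
```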
Citations: 3