Asymmetric coding scheme for 3D frame-compatible formats
Jin Li, J. D. Cock, P. Lambert, R. Walle
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263878 | 2012 Fourth International Workshop on Quality of Multimedia Experience, pp. 154-155
This paper proposes an asymmetric coding scheme for 3D frame-compatible formats. The aim is to improve coding efficiency while maintaining the same visual performance. In the encoding process, prediction is performed between the reconstructed left samples and the original right samples. At the receiver, the samples of the left view and the residuals of the right view are decoded. The right view can then be reconstructed in the up-sampling process by combining the reconstructed left samples with the right-view residuals, so no modification to the decoder is required. The proposed method significantly reduces the bitrate compared to a symmetric coding scheme. Although the PSNR of the right view is on average about 1.5 dB lower than that of the left view, the visual quality is considered tolerable due to the properties of the human visual system.
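The reconstruction step the abstract describes, combining the decoded left samples with the decoded right-view residuals, can be sketched as follows. This is a toy illustration, not the paper's method: it omits the frame-compatible packing and up-sampling stages and assumes plain 8-bit samples; the function names are hypothetical.

```python
import numpy as np

def encode_residual(left_recon, right_orig):
    # Encoder side: predict the right view from the reconstructed
    # left view and transmit only the prediction residual.
    return right_orig.astype(np.int16) - left_recon.astype(np.int16)

def decode_right(left_recon, residual):
    # Decoder side: add the residual back to the reconstructed left
    # samples and clip to the 8-bit sample range.
    return np.clip(left_recon.astype(np.int16) + residual, 0, 255).astype(np.uint8)

left = np.array([[100, 120], [130, 140]], dtype=np.uint8)
right = np.array([[102, 118], [135, 137]], dtype=np.uint8)
res = encode_residual(left, right)
assert np.array_equal(decode_right(left, res), right)
```

Because reconstruction is just addition and clipping, it fits into the existing up-sampling stage, which is why the abstract can claim the decoder itself needs no modification.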
Towards an efficient methodology for evaluation of quality of experience in Augmented Reality
Jordi Puig, A. Perkis, F. Lindseth, T. Ebrahimi
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263864 | pp. 188-193
The goal of this paper is to survey existing quality assessment methodologies for Augmented Reality (AR) visualization and to introduce a methodology for subjective quality assessment. Methodologies to assess the quality of AR systems have existed since these technologies appeared. The existing methodologies typically take an approach from the fields they are used in, such as ergonomics, usability, psychophysics or ethnography. Each field utilizes different methods, looking at different aspects of AR quality such as physical limitations, tracking loss or jitter, perceptual issues or feedback issues, to name a few. AR systems are complex experiences, involving a mix of user interaction, visual perception, audio, haptics and other types of multimodal interaction. This paper focuses on the quality assessment of AR visualization, with a special interest in applications for neuronavigation.
Audiovisual reproduction in surrounding display: Effect of spatial width of audio and video
Olli S. Rummukainen, V. Pulkki
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263861 | pp. 127-132
Current perception-based quality metrics for unimodal systems cannot reflect perceived quality in multimodal situations, and a better understanding of multimodal perceptual mechanisms is needed. In this work, audiovisual perception was studied with an immersive audiovisual display. The aim was to observe cross-modal interaction between the auditory and visual modalities when the spatial widths of the audio and video reproduction were limited, and to evaluate the overall perceived degradation. The results show that both audio width and video width affect the perceived degradation of a stimulus, and that the effect of audio width decreases as video width is decreased. Constrained correspondence analysis suggests that the highest perceived degradation is caused by wrong audio direction, reduced video width and missing essential content.
No reference image quality assessment based on statistical distribution of local Sub-Image-Similarity
Beilian Li, X. Mou
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263862 | pp. 176-181
No-reference image quality assessment (NR IQA) is one of the most active research topics in image quality perception. In this paper, we propose to use the statistical distribution of local Sub-Image-Similarity (SIS) measures for NR IQA model design. The mean and difference properties of the local SIS measurements in different directions are synthesized into five quality labels that characterize the perceptual quality of degraded images. The NR IQA model is then built from the statistical distribution of these quality labels over the whole image, via SVM regression. Experiments show that the proposed model achieves the best predictive accuracy among published NR IQA models and remains stable under different parameter selections and cross-database evaluations.
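The feature-construction pipeline the abstract outlines, directional local similarity maps synthesized via their mean and difference into a distribution over discrete quality labels, can be sketched as below. The abstract does not define the SIS measure itself, so a simple shift-based similarity stands in for it here, and the final SVM regression onto subjective scores is omitted; all function names and the scoring rule are assumptions for illustration only.

```python
import numpy as np

def directional_similarity(img, dy, dx):
    # Local similarity between the image and its shifted copy in one
    # direction (a hypothetical stand-in for the paper's SIS measure).
    shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    num = 2 * img * shifted + 1e-6
    den = img**2 + shifted**2 + 1e-6
    return num / den  # elementwise, in (0, 1]

def label_histogram(img, n_labels=5):
    # Synthesize the mean and difference properties of the directional
    # similarity maps into n_labels quality labels, then return the
    # label distribution over the whole image as the feature vector
    # that an SVM regressor would be trained on.
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    sims = np.stack([directional_similarity(img, dy, dx) for dy, dx in dirs])
    mean_sim = sims.mean(axis=0)                     # mean property
    diff_sim = sims.max(axis=0) - sims.min(axis=0)   # difference property
    score = mean_sim - diff_sim
    labels = np.clip((score * n_labels).astype(int), 0, n_labels - 1)
    hist = np.bincount(labels.ravel(), minlength=n_labels)
    return hist / hist.sum()

img = np.random.default_rng(0).random((32, 32))
feat = label_histogram(img)
assert feat.shape == (5,) and np.isclose(feat.sum(), 1.0)
```

The key design idea is that the regressor sees only the fixed-length label distribution, not the image itself, which is what makes the approach blind (no reference image required).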
Using overlapping subjective datasets to assess the performance of objective quality metrics on Scalable Video Coding and error concealment
Yohann Pitrey, Romuald Pépion, P. Callet, M. Barkowsky
Pub Date: 2012-07-05 | DOI: 10.1109/QoMEX.2012.6263844 | pp. 103-108
In this paper, four subjective video datasets are presented. The considered application is Scalable Video Coding used as an error-concealment mechanism. The datasets explore the relations between encoding parameters and perceived quality under different network-impairment patterns, and involve error concealment on the decoder side to simulate a complete distribution channel. The datasets share a set of common configurations, which makes it possible, in the first part of the paper, to compare the outcomes of several Single Stimulus experiments and draw interesting correspondences between different types of distortion. In the second part, we analyse the performance of three common objective quality metrics at each step of the distribution channel, to identify possible directions for improving their accuracy in predicting perceived quality.
A study on perception of mobile video with surrounding contextual influences
Jingteng Xue, Chang Wen Chen
Pub Date: 2012-07-01 | DOI: 10.1109/QoMEX.2012.6263869 | pp. 248-253
Contemporary users view video content on mobile devices from virtually anywhere and at any time. One experience these mobile users commonly report is a significant difference in visual perception when the viewing context changes, e.g. from indoor to outdoor. Conventional video quality assessment (VQA) as defined in ITU-T recommendations outlines a set of evaluation conditions that must be strictly followed. These conditions, including viewing distance, room illumination, display brightness and background chromaticity, were historically tuned to simulate the living-room television viewing scenario. In this paper, we first present a set of contextual factors that are unique to mobile video and differ substantially from conventional living-room evaluation conditions. We design and perform a series of subjective tests to evaluate the influence of the contextual factors frequently encountered in mobile video. The results show that (1) perceptual quality is highly correlated with the contextual factors, and (2) in the presence of contextual visual interference, viewers spot less video signal distortion and have lower expectations of mobile video quality. We then propose a VQA model based on Just Noticeable Distortion (JND) theory and show that it provides context-aware prediction of perceived mobile video quality. Finally, we discuss the application of this VQA model in the design of video transmission systems.
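The core JND intuition behind the model, that distortion below a visibility threshold is not perceived, and that contextual interference (e.g. bright ambient light) raises that threshold, can be illustrated with a minimal sketch. The linear threshold adjustment and all constants here are hypothetical; the paper's actual model is not specified in the abstract.

```python
def context_jnd(base_jnd, ambient_lux):
    # Hypothetical: brighter surroundings raise the visibility
    # threshold, so more distortion goes unnoticed outdoors.
    return base_jnd * (1.0 + 0.1 * max(0.0, ambient_lux / 100.0))

def perceived_distortion(distortion, jnd):
    # Only the part of the distortion above the JND threshold
    # contributes to perceived quality degradation.
    return max(0.0, distortion - jnd)

indoor = perceived_distortion(4.0, context_jnd(3.0, 0.0))      # 1.0
outdoor = perceived_distortion(4.0, context_jnd(3.0, 10000.0))  # 0.0
assert outdoor < indoor
```

This captures the abstract's finding (2): under contextual visual interference the same signal distortion produces a smaller perceived degradation.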