{"title":"Binocular rivalry-based stereoscopic images quality assessment relevant to its asymmetric and distorted contexts","authors":"Tang Yiling, Jiang Shunliang, Xu Shaoping, Xiao Jian, Chen Xiaojun","doi":"10.11834/jig.220309","DOIUrl":null,"url":null,"abstract":"目的 现有方法存在特征提取时间过长、非对称失真图像预测准确性不高的问题,同时少有工作对非对称失真与对称失真立体图像的分类进行研究,为此提出了基于双目竞争的非对称失真立体图像质量评价方法。方法 依据双目竞争的视觉现象,利用非对称失真立体图像两个视点的图像质量衰减程度的不同,生成单目图像特征的融合系数,融合从左右视点图像中提取的灰度空间特征与HSV (hue-saturation-value)彩色空间特征。同时,量化两个视点图像在结构、信息量和质量衰减程度等多方面的差异,获得双目差异特征。并且将双目融合特征与双目差异特征级联为一个描述能力更强的立体图像质量感知特征向量,训练基于支持向量回归的特征—质量映射模型。此外,还利用双目差异特征训练基于支持向量分类模型的对称失真与非对称失真立体图像分类模型。结果 本文提出的质量预测模型在4个数据库上的SROCC (Spearman rank order correlation coefficient)和PLCC (Pearson linear correlation coefficient)均达到0.95以上,在3个非对称失真数据库上的均方根误差(root of mean square error,RMSE)取值均优于对比算法。在LIVE-II(LIVE 3D image quality database phase II)、IVC-I(Waterloo-IVC 3D image qualityassessment database phase I)和IVC-II (Waterloo-IVC 3D image quality assessment database phase II)这3个非对称失真立体图像测试数据库上的失真类型分类测试中,对称失真立体图像的分类准确率分别为89.91%、94.76%和98.97%,非对称失真立体图像的分类准确率分别为95.46%,92.64%和96.22%。结论 本文方法依据双目竞争的视觉现象融合左右视点图像的质量感知特征用于立体图像质量预测,能够提升非对称失真立体图像的评价准确性和鲁棒性。所提取双目差异性特征还能够用于将对称失真与非对称失真立体图像进行有效分类,分类准确性高。;Objective Computer vision-related stereoscopic image quality assessment(SIQA) is focused on recently. It is essential for parameter setting and system optimizing for such domains of multiple stereoscopic image applications like image storage,compression,transmission,and display. Stereoscopic images can be segmented into two sorts of distorted images:symmetrically and asymmetrically distorted,in terms of the degree of degradation between the left and right views. For symmetric-based distorted stereoscopic images,the distortion type and degree occurred in the left and right views are basically in consistency. Early SIQA methods were effective in evaluating symmetrically distorted images by averaging scores or features derived from the two views. However,in practice,the stereoscopic images are often asymmetrically distorted,where the distortion type and level of the two views are different. Simply averaging the quality values of the two views cannot accurately simulate the binocular fusion process and the binocular rivalry phenomena in relevance to the human visual system. Consequently,the evaluation accuracy of these methods will be down to severe lower when the quality of asymmetrically distorted stereoscopic images is estimated. Previous studies have shown that when the left and right views of a stereoscopic image exhibit varying levels or types of distortion,binocular rivalry is primarily driven by one of the views. Specially,in the process of evaluating the quality of a stereoscopic image,the visual quality of one view has a greater impact on the stereopair quality evaluation than the other view. To address this issue,some methods have simulated the binocular rivalry phenomenon in human visual system,and used a weighted average method to fuse the visual information in the two views of stereo-pairs as well. However,existing methods are still challenged for its lower prediction accuracy of asymmetrically distorted images,and its feature extraction process is also time-consuming. To optimize the evaluation accuracy of asymmetrically distorted images,we develop a binocular rivalry-based no-reference SIQA method. Method Multiple information-contained is used to generate image quality degradation coefficients in the two views,which can describe the degradation level of the distorted images accurately. 
According to the binocular rivalry phenomena in human visual system,the image quality degradation coefficients are used to generate fusion coefficients,which can be used to fuse the views-derived monocular features,including gray-scale features and HSV color space-extracted statistics. Since the human visual system is sensitive to structural information,the binocular structural similarity map(BSSIM) is constructed to measure the structural difference between the left and right views. As one part of the binocular difference features,structural difference features are extracted from the BSSIM. To quantify the differences between the left and right views,other related binocular difference features like entropy difference and degradation difference are obtained further. Finally,the binocular fusion features and the binocular difference features are concatenated into a more descriptive quality-aware feature vector,and a support vector regression model is trained to map the feature vector to the perception quality. In addition,to classify the symmetrically distorted stereoscopic images and the asymmetrically distorted stereoscopic images,a support vector classification model is also trained using the binocular difference features. Result To verify the performance of the proposed SIQA method,4 sorts of publicly benchmark stereoscopic image databases are employed in relevance to the symmetrically and asymmetrically distorted stereoscopic images-involved LIVE 3D IQA Database Phase II(LIVE-II), Waterloo-IVC 3D IQA Database Phase I(IVC-I),and Waterloo-IVC 3D IQA Database Phase II(IVC-II). Symmetrically distorted stereoscopic images are only involved in the LIVE 3D IQA Database Phase I(LIVE-I). Comparative analysis is carried out in related to 10 state-of-the-art SIQA metrics. To measure the performance,three kinds of commonly-used performance indicators are involved in,including Spearman rank ordered correlation coefficient(SROCC),Pearson linear correlation coefficient(PLCC),and the root-mean-squared error(RMSE). The experimental results demonstrate that the SROCCs and the PLCCs(higher is better) of the proposed method are higher than 0. 95. Furthermore,the RMSEs(lower is better) of the proposed method can be reached to a potential lower degree. Additionally,the proposed classifier is tested on LIVE-II,IVC-I,and IVC-II databases. For LIVE-II database,95. 46% of asymmetrically distorted stereoscopic images can be classified accurately. For IVC-I and IVC-II databases,each of classification accuracy of symmetrically distorted images can be reached to 94. 76% and 98. 97%,and each of the classification accuracy of asymmetrically distorted images can be reached to 92. 64% and 96. 22% as well. Conclusion The degradation level can be quantified for the two views of asymmetrically distorted stereoscopic images. The image quality degradation coefficients are employed to fuse the monocular features,and it is beneficial to develop a more descriptive binocular perception feature vector and an improved prediction accuracy and robustness of asymmetrically distorted stereoscopic images. 
The proposed classifier can be used to clarify the symmetrically distorted stereoscopic images and the asymmetrically distorted stereoscopic images as well.","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"55 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"中国图象图形学报","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11834/jig.220309","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Abstract
Objective: Existing methods suffer from long feature-extraction times and low prediction accuracy on asymmetrically distorted images, and little work has addressed the classification of symmetrically versus asymmetrically distorted stereoscopic images. To address these problems, a binocular rivalry-based quality assessment method for asymmetrically distorted stereoscopic images is proposed. Method: Following the visual phenomenon of binocular rivalry, the difference in quality degradation between the two views of an asymmetrically distorted stereoscopic image is used to generate fusion coefficients for the monocular image features, which fuse the gray-scale features and the HSV (hue-saturation-value) color-space features extracted from the left and right views. At the same time, the differences between the two views in structure, information content, and degree of quality degradation are quantified to obtain binocular difference features. The binocular fusion features and the binocular difference features are then concatenated into a more descriptive stereoscopic quality-aware feature vector, and a feature-to-quality mapping model based on support vector regression is trained. In addition, the binocular difference features are used to train a support-vector-classification model that separates symmetrically and asymmetrically distorted stereoscopic images. Result: The proposed quality prediction model achieves SROCC (Spearman rank-order correlation coefficient) and PLCC (Pearson linear correlation coefficient) values above 0.95 on four databases, and its root-mean-square error (RMSE) on three asymmetric-distortion databases is lower than that of the compared algorithms. In distortion-type classification tests on the three asymmetric-distortion test databases LIVE-II (LIVE 3D image quality database phase II), IVC-I (Waterloo-IVC 3D image quality assessment database phase I), and IVC-II (Waterloo-IVC 3D image quality assessment database phase II), the classification accuracy for symmetrically distorted stereoscopic images is 89.91%, 94.76%, and 98.97%, respectively, and for asymmetrically distorted stereoscopic images it is 95.46%, 92.64%, and 96.22%, respectively. Conclusion: Guided by binocular rivalry, the proposed method fuses the quality-aware features of the left and right views for stereoscopic image quality prediction and improves the evaluation accuracy and robustness for asymmetrically distorted stereoscopic images. The extracted binocular difference features can also be used to classify symmetrically and asymmetrically distorted stereoscopic images effectively, with high classification accuracy.

Objective: Stereoscopic image quality assessment (SIQA) has recently attracted considerable attention in computer vision. It is essential for parameter setting and system optimization in many stereoscopic image applications, such as image storage, compression, transmission, and display. Depending on the degree of degradation between the left and right views, distorted stereoscopic images fall into two categories: symmetrically distorted and asymmetrically distorted. For symmetrically distorted stereoscopic images, the distortion type and degree in the left and right views are essentially consistent. Early SIQA methods were effective in evaluating symmetrically distorted images by averaging scores or features derived from the two views. In practice, however, stereoscopic images are often asymmetrically distorted, with different distortion types and levels in the two views. Simply averaging the quality values of the two views cannot accurately simulate the binocular fusion process and the binocular rivalry phenomenon of the human visual system. Consequently, the evaluation accuracy of these methods drops severely when the quality of asymmetrically distorted stereoscopic images is estimated. Previous studies have shown that when the left and right views of a stereoscopic image exhibit different levels or types of distortion, binocular rivalry is primarily driven by one of the views. Specifically, when the quality of a stereoscopic image is evaluated, the visual quality of one view has a greater impact on the stereopair quality than that of the other view. To address this issue, some methods simulate the binocular rivalry phenomenon of the human visual system and use a weighted average to fuse the visual information of the two views of a stereopair. However, existing methods still suffer from low prediction accuracy on asymmetrically distorted images, and their feature extraction is time-consuming. To improve the evaluation accuracy for asymmetrically distorted images, we develop a binocular rivalry-based no-reference SIQA method.

Method: Multiple sources of image information are used to generate quality degradation coefficients for the two views, which accurately describe the degradation level of the distorted images. According to the binocular rivalry phenomenon of the human visual system, the quality degradation coefficients are used to generate fusion coefficients, which fuse the monocular features derived from the two views, including gray-scale features and statistics extracted from the HSV color space. Since the human visual system is sensitive to structural information, a binocular structural similarity map (BSSIM) is constructed to measure the structural difference between the left and right views, and structural difference features extracted from the BSSIM form one part of the binocular difference features. To further quantify the differences between the left and right views, additional binocular difference features, such as the entropy difference and the degradation difference, are obtained. Finally, the binocular fusion features and the binocular difference features are concatenated into a more descriptive quality-aware feature vector, and a support vector regression model is trained to map this feature vector to the perceptual quality. In addition, a support vector classification model is trained on the binocular difference features to distinguish symmetrically distorted from asymmetrically distorted stereoscopic images.
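The fusion step can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the gradient-energy degradation score, the weighting formula, and the helper names (degradation_score, shannon_entropy, fuse_monocular_features) are assumptions introduced for demonstration; the paper's degradation coefficients combine several sources of information, and its structural difference features come from a BSSIM that is omitted here.

```python
# Minimal NumPy sketch of binocular-rivalry-style feature fusion.
# The degradation score and the weighting formula are illustrative
# placeholders, not the coefficients defined in the paper.
import numpy as np


def degradation_score(view_gray: np.ndarray) -> float:
    """Toy degradation measure: weaker gradient energy ~ stronger degradation."""
    gy, gx = np.gradient(view_gray.astype(np.float64))
    return 1.0 / (np.mean(gx ** 2 + gy ** 2) + 1e-6)


def shannon_entropy(view_gray: np.ndarray) -> float:
    """Gray-level entropy, used for the entropy-difference feature."""
    hist, _ = np.histogram(view_gray, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return float(-np.sum(hist * np.log2(hist)))


def fuse_monocular_features(feat_l, feat_r, gray_l, gray_r) -> np.ndarray:
    """Weight each view's feature vector by how little it has degraded.

    Binocular rivalry favors the less degraded view, so that view receives
    the larger fusion coefficient.
    """
    d_l, d_r = degradation_score(gray_l), degradation_score(gray_r)
    w_l = d_r / (d_l + d_r)
    w_r = d_l / (d_l + d_r)
    fused = w_l * np.asarray(feat_l) + w_r * np.asarray(feat_r)
    # Simple binocular difference features (degradation and entropy gaps);
    # the paper additionally uses structural differences from the BSSIM.
    diff = np.array([abs(d_l - d_r),
                     abs(shannon_entropy(gray_l) - shannon_entropy(gray_r))])
    # Mirror the paper's concatenation of fusion and difference features.
    return np.concatenate([fused, diff])
```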
Result: To verify the performance of the proposed SIQA method, four public benchmark stereoscopic image databases are employed. The LIVE 3D IQA Database Phase II (LIVE-II), the Waterloo-IVC 3D IQA Database Phase I (IVC-I), and the Waterloo-IVC 3D IQA Database Phase II (IVC-II) contain both symmetrically and asymmetrically distorted stereoscopic images, whereas the LIVE 3D IQA Database Phase I (LIVE-I) contains only symmetrically distorted images. The method is compared with 10 state-of-the-art SIQA metrics using three commonly used performance indicators: the Spearman rank-order correlation coefficient (SROCC), the Pearson linear correlation coefficient (PLCC), and the root-mean-square error (RMSE). The experimental results show that the SROCC and PLCC values (higher is better) of the proposed method exceed 0.95, and its RMSE values (lower is better) on the three asymmetric-distortion databases are lower than those of the compared metrics. In addition, the proposed classifier is tested on the LIVE-II, IVC-I, and IVC-II databases. On LIVE-II, 95.46% of the asymmetrically distorted stereoscopic images are classified correctly. On IVC-I and IVC-II, the classification accuracy for symmetrically distorted images reaches 94.76% and 98.97%, respectively, and for asymmetrically distorted images it reaches 92.64% and 96.22%, respectively. Conclusion: The degradation level of the two views of an asymmetrically distorted stereoscopic image can be quantified. Using the quality degradation coefficients to fuse the monocular features yields a more descriptive binocular perception feature vector and improves the prediction accuracy and robustness for asymmetrically distorted stereoscopic images. The proposed classifier can also effectively separate symmetrically distorted from asymmetrically distorted stereoscopic images.
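The regression, classification, and evaluation protocol named above can be prototyped with standard libraries. The sketch below is a hedged example using scikit-learn and SciPy: the feature matrices and labels (X_fused, X_diff, y_mos, y_sym) are randomly generated placeholders standing in for the paper's extracted features and subjective scores, and the hyperparameters are illustrative, not the authors' tuned values.

```python
# Sketch of the SVR quality model, the symmetric/asymmetric SVC classifier,
# and the SROCC/PLCC/RMSE indicators mentioned in the abstract.
# X_fused, X_diff, y_mos, y_sym are hypothetical placeholders.
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X_fused = rng.normal(size=(365, 64))   # fusion + difference features per stereopair
y_mos = rng.uniform(0, 100, size=365)  # subjective quality scores (e.g., DMOS)
X_diff = rng.normal(size=(365, 16))    # binocular difference features only
y_sym = rng.integers(0, 2, size=365)   # 0 = symmetric, 1 = asymmetric distortion

# Feature-to-quality regression (kernel, C, and epsilon are untuned examples).
quality_model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X_fused, y_mos)
pred = quality_model.predict(X_fused)

# Symmetric vs. asymmetric distortion classifier on the difference features.
sym_classifier = SVC(kernel="rbf", C=10.0).fit(X_diff, y_sym)

# Standard correlation/error indicators; in practice the databases are split
# into training and test sets, which is skipped here to keep the sketch short.
srocc = spearmanr(y_mos, pred)[0]
plcc = pearsonr(y_mos, pred)[0]
rmse = float(np.sqrt(np.mean((y_mos - pred) ** 2)))
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}  RMSE={rmse:.3f}")
```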
中国图象图形学报 (Journal of Image and Graphics), Computer Science: Computer Graphics and Computer-Aided Design
CiteScore: 1.20
Self-citation rate: 0.00%
Annual publication volume: 6776
About the journal:
Journal of Image and Graphics (ISSN 1006-8961, CN 11-3758/TB, CODEN ZTTXFZ) is an authoritative academic journal supervised by the Chinese Academy of Sciences and co-sponsored by the Institute of Space and Astronautical Information Innovation of the Chinese Academy of Sciences (ISIAS), the Chinese Society of Image and Graphics (CSIG), and the Beijing Institute of Applied Physics and Computational Mathematics (BIAPM). The journal brings together theory, technical methods, and the industrialization of applied research results in image and graphics, and mainly publishes innovative, high-level papers on basic and applied research in image and graphics science and closely related fields. Published formats include reviews, technical reports, project progress reports, academic news, surveys of new technologies, new product introductions, and industrialization studies. Coverage spans image analysis and recognition, image understanding and computer vision, computer graphics, virtual and augmented reality, system simulation, animation, and related areas, with special columns organized around research hotspots and cutting-edge topics.
Journal of Image and Graphics reaches a wide range of readers, including scientific and technical personnel, enterprise executives, and postgraduate and undergraduate students working in fields such as national defence, military, aviation, aerospace, communications, electronics, automotive, agriculture, meteorology, environmental protection, remote sensing, surveying and mapping, oil fields, construction, transportation, finance, telecommunications, education, medical care, film and television, and art.
Journal of Image and Graphics is indexed by many important domestic and international scientific literature databases, including the EBSCO database (United States), the JST database (Japan), the Scopus database (Netherlands), China Science and Technology Paper Statistics and Analysis (Annual Research Report), the China Science Citation Database (CSCD), the China Academic Journal Network Publishing Database (CAJD), China Academic Journal Abstracts, Chinese Science Abstracts (Series A), China Electronic Science Abstracts, Chinese Core Journals Abstracts, Chinese Academic Journals on CD-ROM, and the China Academic Journals Comprehensive Evaluation Database.