{"title":"使用距离加倍方差的无参考图像质量度量","authors":"Long Bao, K. Panetta, S. Agaian","doi":"10.1109/TePRA.2015.7219659","DOIUrl":null,"url":null,"abstract":"Image quality assessment becomes essential for autonomous systems, where processing occurs on an acquired image and is then used for detection and recognition of objects. Images exhibiting low quality and captured in the presence of noise that are used as the basis for image recognition systems can dramatically impair the overall recognition system's performance. In this paper, we will present a new distance double variance color image quality measure that does not require a reference image in order to make its evaluation of the quality of an image. The Distance Doubling Variance measure differs from existing color image quality methods, which typically attempt to extend traditional grayscale image approaches for color images. Here, we utilize the color properties in the color space, where we evaluate the difference between two color pixels by computing the distance in the color space using different weights for each of the color components. Based on this distance, we calculate the double variance of the distance matrix. This matrix consists of the maximum distance of each pixel and its corresponding neighboring pixels. To demonstrate its performance, we use the TID-2013 database, which includes 24 different types of distortions for different kinds of images. The simulations are compared with state-of-the-art methods to show the new method has high agreement with human's visual system in many types of distortions.","PeriodicalId":325788,"journal":{"name":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"A no reference image quality measure using a distance doubling variance\",\"authors\":\"Long Bao, K. Panetta, S. Agaian\",\"doi\":\"10.1109/TePRA.2015.7219659\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Image quality assessment becomes essential for autonomous systems, where processing occurs on an acquired image and is then used for detection and recognition of objects. Images exhibiting low quality and captured in the presence of noise that are used as the basis for image recognition systems can dramatically impair the overall recognition system's performance. In this paper, we will present a new distance double variance color image quality measure that does not require a reference image in order to make its evaluation of the quality of an image. The Distance Doubling Variance measure differs from existing color image quality methods, which typically attempt to extend traditional grayscale image approaches for color images. Here, we utilize the color properties in the color space, where we evaluate the difference between two color pixels by computing the distance in the color space using different weights for each of the color components. Based on this distance, we calculate the double variance of the distance matrix. This matrix consists of the maximum distance of each pixel and its corresponding neighboring pixels. To demonstrate its performance, we use the TID-2013 database, which includes 24 different types of distortions for different kinds of images. 
The simulations are compared with state-of-the-art methods to show the new method has high agreement with human's visual system in many types of distortions.\",\"PeriodicalId\":325788,\"journal\":{\"name\":\"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)\",\"volume\":\"9 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2015-05-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TePRA.2015.7219659\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Conference on Technologies for Practical Robot Applications (TePRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TePRA.2015.7219659","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A no reference image quality measure using a distance doubling variance
Image quality assessment is essential for autonomous systems, where an acquired image is processed and then used for object detection and recognition. Low-quality images captured in the presence of noise, when used as the input to image recognition systems, can dramatically impair the overall system's performance. In this paper, we present a new Distance Doubling Variance color image quality measure that does not require a reference image to evaluate the quality of an image. The Distance Doubling Variance measure differs from existing color image quality methods, which typically attempt to extend traditional grayscale approaches to color images. Instead, we exploit the properties of the color space directly, evaluating the difference between two color pixels by computing their distance in the color space with a different weight for each color component. Based on this distance, we build a distance matrix consisting of the maximum distance between each pixel and its neighboring pixels, and then compute the double variance of this matrix. To demonstrate its performance, we use the TID2013 database, which includes 24 different types of distortions across a variety of images. The simulation results are compared with state-of-the-art methods, showing that the new measure agrees closely with the human visual system across many types of distortions.
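The abstract outlines the measure only at a high level; the sketch below is one plausible reading of it, not the authors' published formulation. The color-component weights, the 8-neighbor window, the block size, and the interpretation of "double variance" as the variance of block-wise variances of the maximum-distance matrix are all assumptions made for illustration.

```python
import numpy as np

def distance_doubling_variance(img, weights=(0.3, 0.5, 0.2), block=8):
    """Hedged sketch of a distance-doubling-variance style score for a
    color image (H x W x 3). Weights, neighborhood, block size, and the
    'double variance' step are assumptions, not the paper's exact method."""
    img = img.astype(np.float64)
    h, w, _ = img.shape

    # Weighted color-space distance between each pixel and each of its
    # 8 neighbors; keep the maximum distance per pixel (assumed definition
    # of the distance matrix described in the abstract).
    max_dist = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            diff2 = (img - shifted) ** 2
            dist = np.sqrt(sum(wc * diff2[..., c] for c, wc in enumerate(weights)))
            max_dist = np.maximum(max_dist, dist)

    # 'Double variance': here taken as variance applied twice -- the
    # variance of block-wise variances of the maximum-distance matrix.
    bh, bw = h // block, w // block
    blocks = max_dist[:bh * block, :bw * block].reshape(bh, block, bw, block)
    block_var = blocks.var(axis=(1, 3))
    return block_var.var()
```

As a usage note, a hypothetical call on an 8-bit RGB array such as `distance_doubling_variance(np.asarray(Image.open("test.png")))` would return a single scalar; how that scalar maps to perceived quality (and whether higher means better or worse) would depend on the paper's calibration against TID2013, which the abstract does not specify.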