{"title":"基于方差的Patch选择在无参考图像质量评估中的作用","authors":"S. F. Hosseini-Benvidi, Azadeh Mansouri","doi":"10.1109/IPRIA59240.2023.10147195","DOIUrl":null,"url":null,"abstract":"The objective of the No-Reference Image Quality Assessment (NR-IQA) is to evaluate the perceived image quality subjectively. Since there is no reference image, this is a challenging and unresolved issue. Convolutional neural networks (CNNs) have gained popularity in recent years and have outperformed many traditional techniques in the field of image processing. In order to overcome overfitting, a large percentage of deep learning based IQA methods work with tiny image patches and assess the quality of the entire image based on the average scores of patches. Patch extraction is one of the most crucial elements of CNN-based methods in quality assessment problems. Assuming that visual perception in humans is well suited to extract structural details from a scene, we analyzed the effect of feeding informative and structural patches to the quality framework. In this paper, a method for structural patch extraction is presented, which is based on the variance values of each patch. The obtained results show that the presented method has an acceptable improvement compared to the random patch selection. The proposed model has also performed well in cross-dataset experiments on common distortions, indicating the model's high generalizability. Additionally, the test was run on the flipped images, and the outcomes are satisfactory.","PeriodicalId":109390,"journal":{"name":"2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Effect of Variance-Based Patch Selection on No-Reference Image Quality Assessment\",\"authors\":\"S. F. Hosseini-Benvidi, Azadeh Mansouri\",\"doi\":\"10.1109/IPRIA59240.2023.10147195\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The objective of the No-Reference Image Quality Assessment (NR-IQA) is to evaluate the perceived image quality subjectively. Since there is no reference image, this is a challenging and unresolved issue. Convolutional neural networks (CNNs) have gained popularity in recent years and have outperformed many traditional techniques in the field of image processing. In order to overcome overfitting, a large percentage of deep learning based IQA methods work with tiny image patches and assess the quality of the entire image based on the average scores of patches. Patch extraction is one of the most crucial elements of CNN-based methods in quality assessment problems. Assuming that visual perception in humans is well suited to extract structural details from a scene, we analyzed the effect of feeding informative and structural patches to the quality framework. In this paper, a method for structural patch extraction is presented, which is based on the variance values of each patch. The obtained results show that the presented method has an acceptable improvement compared to the random patch selection. The proposed model has also performed well in cross-dataset experiments on common distortions, indicating the model's high generalizability. 
Additionally, the test was run on the flipped images, and the outcomes are satisfactory.\",\"PeriodicalId\":109390,\"journal\":{\"name\":\"2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-02-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IPRIA59240.2023.10147195\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 6th International Conference on Pattern Recognition and Image Analysis (IPRIA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IPRIA59240.2023.10147195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The objective of No-Reference Image Quality Assessment (NR-IQA) is to estimate the subjectively perceived quality of an image. Because no reference image is available, this remains a challenging and open problem. Convolutional neural networks (CNNs) have gained popularity in recent years and have outperformed many traditional techniques in image processing. To mitigate overfitting, a large proportion of deep-learning-based IQA methods operate on small image patches and assess the quality of the entire image as the average of the patch scores. Patch extraction is therefore one of the most critical components of CNN-based quality assessment methods. Building on the assumption that human visual perception is well suited to extracting structural details from a scene, we analyze the effect of feeding informative, structure-rich patches to the quality framework. In this paper, a structural patch extraction method is presented that selects patches based on the variance values of each patch. The results show that the presented method yields an acceptable improvement over random patch selection. The proposed model also performs well in cross-dataset experiments on common distortions, indicating high generalizability. Additionally, the model was tested on flipped images, and the outcomes are satisfactory.
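The patch-selection idea summarized in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' implementation: it tiles a grayscale image into fixed-size patches, ranks them by intensity variance, and keeps the highest-variance (most structural) patches. In the full pipeline, the selected patches would then be scored by a CNN and the image-level score taken as the mean of the patch scores. The function names (extract_patches, select_high_variance_patches) and parameter choices (32x32 patches, 16 selected) are assumptions made for the example.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch_size: int = 32, stride: int = 32) -> np.ndarray:
    """Slide a window over a grayscale image and return an array of
    shape (n_patches, patch_size, patch_size)."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

def select_high_variance_patches(patches: np.ndarray, n_select: int) -> np.ndarray:
    """Rank patches by intensity variance and keep the n_select patches
    with the highest variance (assumed to carry the most structure)."""
    variances = patches.reshape(len(patches), -1).var(axis=1)
    top_idx = np.argsort(variances)[::-1][:n_select]
    return patches[top_idx]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in for a real grayscale image; replace with actual image data.
    image = rng.random((256, 256)).astype(np.float32)
    patches = extract_patches(image, patch_size=32, stride=32)
    selected = select_high_variance_patches(patches, n_select=16)
    # In the full framework, each selected patch would be passed to a CNN
    # quality predictor and the image score averaged over patch scores.
    print(selected.shape)  # (16, 32, 32)
```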