Dissecting the effectiveness of deep features as metric of perceptual image quality

IF 6.3 · CAS Zone 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Neural Networks · Pub Date: 2025-05-01 · Epub Date: 2025-01-27 · DOI: 10.1016/j.neunet.2025.107189
Pablo Hernández-Cámara, Jorge Vila-Tomás, Valero Laparra, Jesús Malo
Neural Networks, Volume 185, Article 107189. Full text: https://www.sciencedirect.com/science/article/pii/S0893608025000681
Citations: 0

Abstract

There is an open debate on the role of artificial networks in understanding the visual brain. Internal representations of images in artificial networks develop human-like properties. In particular, evaluating distortions through differences between internal features correlates with human perception of distortion. However, the origins of this correlation are not well understood.
Here, we dissect the different factors involved in the emergence of human-like behavior: function, architecture, and environment. To do so, we evaluate the aforementioned human-network correlation at different depths of 46 pre-trained model configurations that include no psycho-visual information. The results show that most of the models correlate better with human opinion than SSIM (a de-facto standard in subjective image quality). Moreover, some models outperform state-of-the-art networks specifically tuned for the application (LPIPS, DISTS). Regarding function, supervised classification leads to nets that correlate better with humans than the explored self- and non-supervised models. However, we found that better performance on the task does not imply more human-like behavior. Regarding architecture, simpler models correlate better with humans than very deep nets, and the highest correlation is generally not achieved in the last layer. Finally, regarding environment, training on large natural datasets leads to higher correlations than training on smaller databases with restricted content, as expected. We also found that the best classification models are not the best at predicting human distances.
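The SSIM baseline mentioned above compares images through luminance, contrast, and structure statistics. A minimal single-window sketch of the SSIM formula (the standard metric averages this over sliding local windows; the constants are the usual defaults for an 8-bit dynamic range):

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale images.

    The standard metric computes this over local sliding windows and
    averages; a single global window is enough to show the formula.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

By construction the score is 1 for identical images and decreases as statistics diverge, which is what makes it usable as a quality baseline.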
In the general debate about understanding human vision, our empirical findings imply that explanations should not focus on a single abstraction level: function, architecture, and environment are all relevant.
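The core measurement in the study, a distance between internal network features compared against human opinion, can be sketched in miniature. The random fixed convolution below is only a toy stand-in for a layer of a pretrained network, and the injected noise level plays the role of the human distortion score; none of these names come from the paper itself. Spearman correlation is computed as Pearson correlation on ranks:

```python
import numpy as np

def conv_features(img, filters):
    """Valid 3x3 cross-correlation with each filter, then ReLU.

    A toy stand-in for the internal feature map of one network layer.
    """
    h, w = img.shape
    out = np.empty((len(filters), h - 2, w - 2))
    for k, f in enumerate(filters):
        for i in range(h - 2):
            for j in range(w - 2):
                out[k, i, j] = (img[i:i + 3, j:j + 3] * f).sum()
    return np.maximum(out, 0.0)

def feature_distance(x, y, filters):
    """Euclidean distance between internal feature maps of two images."""
    return float(np.linalg.norm(conv_features(x, filters) - conv_features(y, filters)))

def spearman(a, b):
    """Spearman rank correlation (Pearson correlation of the ranks)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))
```

In the paper this comparison is run at every depth of 46 pretrained models against real psychophysical scores; in this sketch the "human" score is simply the distortion strength applied to a reference image.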
Source journal: Neural Networks (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.
Latest articles in this journal:
Minimizing command timing variability is a key factor in skilled actions
Inferring gene regulatory networks via adversarially regularized directed graph autoencoder
A continual learning framework with long-term and multiple short-term memory networks
A Multi-Agent Continual reinforcement learning framework with multi-Timescale replay and dynamic task classification
A survey of recent advances in adversarial attack and defense on vision-language models