Zuohui Chen, Yao Lu, JinXuan Hu, Qi Xuan, Zhen Wang, Xiaoniu Yang
Title: Graph-Based Similarity of Deep Neural Networks
DOI: 10.1016/j.neucom.2024.128722
Journal: Neurocomputing, Volume 614, Article 128722 (JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 5.5)
Publication date: 2024-10-21 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0925231224014930
Citations: 0
Abstract
Understanding the black-box representations within Deep Neural Networks (DNNs) is an essential problem in the deep learning community. An initial step toward tackling it is quantifying the degree of similarity between these representations. Various approaches have been proposed in prior research; however, as the field of representation similarity continues to develop, existing metrics are not compatible with one another and struggle to meet evolving demands. To address this, we propose a comprehensive similarity measurement framework inspired by the natural graph structure formed by samples and their corresponding features within a neural network. Our Graph-Based Similarity (GBS) framework gauges the similarity of DNN representations by constructing a weighted, undirected graph from the outputs of hidden layers. In this graph, each node represents an input sample, and each edge is weighted by the similarity between its pair of nodes. Representational similarity can then be derived through graph similarity metrics, such as layer similarity. We observe that input samples belonging to the same category exhibit dense interconnections within the deep layers of a DNN. To quantify this phenomenon, we employ a motif-based approach to gauge the extent of these interconnections, which serves as a metric for evaluating whether the representation derived from one model can be accurately classified by another. Experimental results show that GBS achieves state-of-the-art performance on sanity checks. We also extensively evaluate GBS on downstream tasks, including measuring the transferability of pretrained models and model pruning, to demonstrate its effectiveness.
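The abstract's core construction can be illustrated with a short sketch. This is not the paper's exact GBS metric (the abstract does not specify the edge similarity function or the graph comparison used), but a minimal example under two assumptions: edge weights are cosine similarities between hidden-layer activations, and two graphs are compared by correlating their edge weights.

```python
import numpy as np

def similarity_graph(acts):
    """Build a weighted, undirected graph over input samples.

    acts: (n_samples, n_features) array of hidden-layer outputs.
    Each node is a sample; edge weights are pairwise cosine
    similarities (an assumed choice of similarity function).
    """
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    unit = acts / np.clip(norms, 1e-12, None)
    adj = unit @ unit.T          # symmetric cosine-similarity matrix
    np.fill_diagonal(adj, 0.0)   # drop self-loops
    return adj

def graph_similarity(acts_a, acts_b):
    """Compare two representations of the same inputs by correlating
    the edge weights of their similarity graphs (an assumed graph
    similarity metric, standing in for the paper's layer similarity)."""
    a = similarity_graph(acts_a)
    b = similarity_graph(acts_b)
    iu = np.triu_indices_from(a, k=1)   # upper triangle = edge list
    return np.corrcoef(a[iu], b[iu])[0, 1]
```

A representation compared with itself yields a graph similarity of 1.0, and the score is invariant to orthogonal rotations of the feature space, since rotations preserve cosine similarities between samples.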
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.