{"title":"Analysis of Evaluation Metrics with the Distance between Positive Pairs and Negative Pairs in Deep Metric Learning","authors":"Hajime Oi, Rei Kawakami, T. Naemura","doi":"10.23919/MVA51890.2021.9511393","DOIUrl":null,"url":null,"abstract":"Deep metric learning (DML) acquires embeddings via deep learning, where distances among samples of the same class are shorter than those of different classes. The previous DML studies proposed new metrics to overcome the issues of general metrics, but they have the following two problems; one is that they consider only a small portion of the whole distribution of the data, and the other is that their scores cannot be directly compared among methods when the number of classes is different. To analyze these issues, we consider the histograms of the inner products between arbitrary positive pairs and those of negative pairs. We can evaluate the entire distribution by measuring the distance between the two histograms. By normalizing the histograms by their areas, we can also cancel the effect of the number of classes. In experiments, visualizations of the histograms revealed that the embeddings of the existing DML methods do not generalize well to the validation set. 
We also confirmed that the evaluation of the distance between the positive and negative histograms is less affected by the variation in the number of classes compared with Recall@1 and MAP@R.","PeriodicalId":312481,"journal":{"name":"2021 17th International Conference on Machine Vision and Applications (MVA)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 17th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA51890.2021.9511393","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Deep metric learning (DML) acquires embeddings via deep learning such that distances among samples of the same class are shorter than those among samples of different classes. Previous DML studies have proposed new metrics to overcome the issues of general-purpose metrics, but these metrics have two problems: first, they consider only a small portion of the whole distribution of the data; second, their scores cannot be directly compared across methods when the number of classes differs. To analyze these issues, we consider the histograms of the inner products between arbitrary positive pairs and those between negative pairs. By measuring the distance between the two histograms, we can evaluate the entire distribution. By normalizing the histograms by their areas, we can also cancel the effect of the number of classes. In experiments, visualizations of the histograms revealed that the embeddings of existing DML methods do not generalize well to the validation set. We also confirmed that evaluating the distance between the positive and negative histograms is less affected by variation in the number of classes than Recall@1 and MAP@R.
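The evaluation the abstract describes can be sketched in a few steps: compute inner products between all pairs of embeddings, split them into positive pairs (same class) and negative pairs (different classes), build an area-normalized histogram of each, and measure the distance between the two histograms. The sketch below is a minimal illustration of that idea, not the paper's implementation; the binning, the use of L2-normalized embeddings, and the total-variation-style distance are all assumptions chosen for clarity.

```python
import numpy as np

def pair_histograms(embeddings, labels, bins=100):
    """Area-normalized histograms of inner products for positive
    and negative pairs (illustrative sketch; binning is an assumption)."""
    emb = np.asarray(embeddings, dtype=np.float64)
    # L2-normalize so inner products lie in [-1, 1] (assumption)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    # Upper triangle only: each unordered pair counted once, no self-pairs
    iu = np.triu_indices(len(labels), k=1)
    pos = sims[iu][same[iu]]   # positive pairs: same class
    neg = sims[iu][~same[iu]]  # negative pairs: different classes
    edges = np.linspace(-1.0, 1.0, bins + 1)
    # density=True normalizes each histogram by its area, cancelling
    # the effect of how many pairs (hence classes) there are
    h_pos, _ = np.histogram(pos, bins=edges, density=True)
    h_neg, _ = np.histogram(neg, bins=edges, density=True)
    return h_pos, h_neg, edges

def histogram_distance(h_pos, h_neg, edges):
    """Total-variation distance between the two densities, in [0, 1].
    The paper may use a different distance; this is a stand-in."""
    width = np.diff(edges)
    return 0.5 * np.sum(np.abs(h_pos - h_neg) * width)

# Toy usage: two tight, well-separated classes give nearly disjoint
# positive/negative histograms, so the distance approaches 1.
rng = np.random.default_rng(0)
a = rng.normal([5.0, 0.0], 0.1, size=(20, 2))  # class 0
b = rng.normal([0.0, 5.0], 0.1, size=(20, 2))  # class 1
emb = np.vstack([a, b])
lab = np.array([0] * 20 + [1] * 20)
h_pos, h_neg, edges = pair_histograms(emb, lab, bins=50)
dist = histogram_distance(h_pos, h_neg, edges)
print(f"histogram distance: {dist:.3f}")
```

Because the histograms are normalized to unit area, the distance depends only on the shapes of the two similarity distributions, which is what makes it comparable across datasets with different numbers of classes.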