Class Size Variance Minimization to Metric Learning for Dish Identification
Shilong Feng, H. Xie, Hongbo Yin, Xiaopeng Chen, Deshun Yang, P. Chan
2019 International Conference on Machine Learning and Cybernetics (ICMLC), July 2019
DOI: 10.1109/ICMLC48188.2019.8949253
Abstract
The objective of metric learning is to search for a suitable metric for measuring the distance or similarity between samples. Usually, it aims to minimize the distance between samples of the same class and to maximize the distance between samples of different classes. However, most metric learning methods do not consider the sizes of classes, which may harm classification performance, since the size of a cluster is usually ignored in the distance comparison. In this work, we propose a triplet loss with a variance constraint. Our method focuses not only on the distances between samples but also on the sizes of classes: the size difference between classes is also minimized in our objective function. The experimental results confirm that our method outperforms the one without the class size variance constraint.
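The abstract does not give the exact formulation of the objective, but a minimal sketch of the general idea might combine a standard triplet loss with a penalty on the variance of per-class spreads. In the sketch below, the function name, the choice of mean distance to the class centroid as the "class size", and the weighting factor lambda_var are assumptions for illustration, not the authors' implementation.

```python
import torch


def triplet_loss_with_size_variance(embeddings, labels, margin=1.0, lambda_var=0.1):
    """Hypothetical sketch: a triplet loss plus a penalty on the variance of
    per-class sizes (here approximated by each class's mean distance to its
    centroid). Illustrates the idea in the abstract, not the exact objective."""
    # Standard triplet term over all valid (anchor, positive, negative) triplets.
    dist = torch.cdist(embeddings, embeddings)  # pairwise Euclidean distances
    n = embeddings.size(0)
    triplet_terms = []
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue
            for q in range(n):
                if labels[q] == labels[a]:
                    continue
                triplet_terms.append(
                    torch.clamp(dist[a, p] - dist[a, q] + margin, min=0.0)
                )
    triplet_term = (
        torch.stack(triplet_terms).mean() if triplet_terms else embeddings.sum() * 0.0
    )

    # Class-size variance term: penalize differences in intra-class spread,
    # so that no class is treated as "larger" than another during comparison.
    class_sizes = []
    for c in labels.unique():
        members = embeddings[labels == c]
        centroid = members.mean(dim=0)
        class_sizes.append((members - centroid).norm(dim=1).mean())
    size_variance = (
        torch.stack(class_sizes).var() if len(class_sizes) > 1 else embeddings.sum() * 0.0
    )

    return triplet_term + lambda_var * size_variance


# Toy usage: 8 random 2-D embeddings from 2 classes.
emb = torch.randn(8, 2, requires_grad=True)
lab = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
loss = triplet_loss_with_size_variance(emb, lab)
loss.backward()
print(loss.item())
```

Under these assumptions, the variance term pushes all classes toward a similar spread in the embedding space, so that distance comparisons between a query and different classes are not biased by how large each class's cluster happens to be.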