{"title":"Different triplet sampling techniques for lossless triplet loss on metric similarity learning","authors":"Gábor Kertész","doi":"10.1109/SAMI50585.2021.9378628","DOIUrl":null,"url":null,"abstract":"Metric embedding learning is a special form of supervised learning: instead of regression or classification a similarity value is predicted based on embedded vector distance. To implement such a behavior, first the Siamese architecture was introduced, where training is based on two input samples, and the transformation model seeks to minimize distance between same-category samples, and increase distance between different samples. To deal with the problem of overtraining, the triplet loss was introduced in 2015, considering three input samples at a training step. Triplet networks also highlighted a novel problem: sample selection is important to eliminate those training triplets, where the measured distance based similarity results in zero loss. To deal with this phenomena, triplet mining techniques are analyzed, while other researchers discussed the possibility of different triplet-based loss functions. In this paper, the so-called lossless triplet loss function is compared with the original triplet loss method, while applying different negative sampling methods.","PeriodicalId":402414,"journal":{"name":"2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 19th World Symposium on Applied Machine Intelligence and Informatics (SAMI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAMI50585.2021.9378628","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Metric embedding learning is a special form of supervised learning: instead of performing regression or classification, a similarity value is predicted based on the distance between embedded vectors. The Siamese architecture was the first to implement this behavior: training is based on pairs of input samples, and the transformation model seeks to minimize the distance between same-category samples while increasing the distance between samples of different categories. To address the problem of overtraining, the triplet loss was introduced in 2015, considering three input samples at each training step. Triplet networks also highlighted a novel problem: sample selection is important to eliminate those training triplets whose distance-based similarity already results in zero loss. To deal with this phenomenon, triplet mining techniques are analyzed, while other researchers have explored alternative triplet-based loss functions. In this paper, the so-called lossless triplet loss function is compared with the original triplet loss method while different negative sampling methods are applied.
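For reference, the original triplet loss the abstract refers to (introduced in 2015 with FaceNet, Schroff et al.) is commonly written with an embedding f, squared Euclidean distance, and a margin α; the notation below is a sketch for orientation, not quoted from the paper:

    \[
    \mathcal{L}_{\mathrm{triplet}}(a, p, n) \;=\; \max\!\bigl(0,\;
    \lVert f(a) - f(p) \rVert^2 \;-\; \lVert f(a) - f(n) \rVert^2 \;+\; \alpha \bigr)
    \]

A triplet contributes zero loss, and therefore no gradient, whenever ∥f(a) − f(n)∥² ≥ ∥f(a) − f(p)∥² + α; such "easy" triplets are exactly what the mining and negative-sampling techniques studied in the paper try to filter out.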
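The "lossless" triplet loss compared in the paper is usually attributed to a 2018 blog post by Marc-Olivier Arsenault; a minimal NumPy sketch of that formulation follows, assuming sigmoid-bounded N-dimensional embeddings (the function and parameter names here are illustrative, not taken from the paper):

    import numpy as np

    def lossless_triplet_loss(anchor, positive, negative, eps=1e-8):
        # Assumes each row is a sigmoid-bounded embedding, so every
        # coordinate lies in (0, 1) and the maximum possible squared
        # distance between two embeddings is N, the embedding dimension.
        n = anchor.shape[-1]
        d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # anchor-positive
        d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # anchor-negative
        # -log(-(d/N) + 1 + eps) grows without bound as d_pos approaches N,
        # so the loss never saturates at zero the way the hinge loss does.
        loss_pos = -np.log(-(d_pos / n) + 1 + eps)
        loss_neg = -np.log(-((n - d_neg) / n) + 1 + eps)
        return np.mean(loss_pos + loss_neg)

    # Usage: a batch of 4 triplets with 8-dimensional embeddings in [0, 1)
    rng = np.random.default_rng(0)
    a, p, neg = rng.random((3, 4, 8))
    print(lossless_triplet_loss(a, p, neg))

Because the logarithmic terms never flatten out, every triplet keeps contributing gradient, which is why the interaction with different negative sampling strategies is worth studying separately from the standard hinge-style loss.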