Fashion Style-Aware Embeddings for Clothing Image Retrieval
Rino Naka, Marie Katsurai, Keisuke Yanagi, Ryosuke Goto
Proceedings of the 2022 International Conference on Multimedia Retrieval (ICMR '22)
Published: 2022-06-27
DOI: 10.1145/3512527.3531433 (https://doi.org/10.1145/3512527.3531433)
Citations: 4
Abstract
Clothing image retrieval is becoming increasingly important as social media users come to enjoy sharing their daily outfits. Most conventional methods support only single-query retrieval and depend on visual features learned via target classification training. This paper presents an embedding learning framework that uses novel style description features available in users' posts, allowing both image-based and multiple-choice-based queries for practical clothing image retrieval. Specifically, the proposed method exploits the following complementary information to represent fashion styles: season tags, style tags, users' heights, and silhouette descriptions. We then learn embeddings with a quadruplet loss that considers ranked pairings of the visual features and the proposed style description features, enabling flexible outfit search with either type of feature as the query. Experiments conducted on WEAR posts demonstrated the effectiveness of the proposed method compared with several baseline methods.
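The abstract does not spell out the loss formulation, so the sketch below is only a plausible reading of it: a generic quadruplet loss in the style of Chen et al. (CVPR 2017), applied cross-modally so that a visual embedding is pulled toward the style-description embedding of the same post and pushed away from ranked negatives. The function name, margin values, and batch construction are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=0.4, margin2=0.2):
    """Generic quadruplet loss (after Chen et al., CVPR 2017).

    All arguments are (batch, dim) embeddings. NOTE: this is an
    illustrative stand-in for the paper's ranked-pairing loss,
    not its exact formulation.
    """
    d_ap = F.pairwise_distance(anchor, positive)      # anchor-positive
    d_an = F.pairwise_distance(anchor, negative1)     # anchor-negative
    d_nn = F.pairwise_distance(negative1, negative2)  # negative-negative
    # Triplet-style term: keep the positive closer to the anchor
    # than negative1, by at least margin1.
    term1 = F.relu(d_ap - d_an + margin1)
    # Quadruplet term: a smaller margin against a pair of negatives,
    # further separating unrelated samples from one another.
    term2 = F.relu(d_ap - d_nn + margin2)
    return (term1 + term2).mean()

# Hypothetical cross-modal usage: visual embeddings as anchors,
# style-description embeddings (season/style tags, height,
# silhouette) of the same posts as positives, other posts as negatives.
img = torch.randn(32, 128, requires_grad=True)  # visual embeddings
sty = torch.randn(32, 128)   # style-description features, same posts
neg1 = torch.randn(32, 128)  # style features of dissimilar posts
neg2 = torch.randn(32, 128)  # a second, unrelated negative set
loss = quadruplet_loss(img, sty, neg1, neg2)
loss.backward()
```

Because both feature types are mapped into one shared space, either an image or a set of style-description choices can serve as the query at retrieval time, which matches the flexible search the abstract describes.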