Multi-modal Preference Modeling for Product Search

Yangyang Guo, Zhiyong Cheng, Liqiang Nie, Xin-Shun Xu, M. Kankanhalli
{"title":"Multi-modal Preference Modeling for Product Search","authors":"Yangyang Guo, Zhiyong Cheng, Liqiang Nie, Xin-Shun Xu, M. Kankanhalli","doi":"10.1145/3240508.3240541","DOIUrl":null,"url":null,"abstract":"The visual preference of users for products has been largely ignored by the existing product search methods. In this work, we propose a multi-modal personalized product search method, which aims to search products which not only are relevant to the submitted textual query, but also match the user preferences from both textual and visual modalities. To achieve the goal, we first leverage the also_view and buy_after_viewing products to construct the visual and textual latent spaces, which are expected to preserve the visual similarity and semantic similarity of products, respectively. We then propose a translation-based search model (TranSearch ) to 1) learn a multi-modal latent space based on the pre-trained visual and textual latent spaces; and 2) map the users, queries and products into this space for direct matching. The TranSearch model is trained based on a comparative learning strategy, such that the multi-modal latent space is oriented to personalized ranking in the training stage. Experiments have been conducted on real-world datasets to validate the effectiveness of our method. The results demonstrate that our method outperforms the state-of-the-art method by a large margin.","PeriodicalId":339857,"journal":{"name":"Proceedings of the 26th ACM international conference on Multimedia","volume":"130 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"55","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 26th ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3240508.3240541","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 55

Abstract

Existing product search methods have largely ignored users' visual preferences for products. In this work, we propose a multi-modal personalized product search method that aims to retrieve products that are not only relevant to the submitted textual query but also match user preferences in both the textual and visual modalities. To achieve this goal, we first leverage the also_view and buy_after_viewing products to construct the visual and textual latent spaces, which are expected to preserve the visual similarity and semantic similarity of products, respectively. We then propose a translation-based search model (TranSearch) to 1) learn a multi-modal latent space based on the pre-trained visual and textual latent spaces; and 2) map users, queries, and products into this space for direct matching. The TranSearch model is trained with a comparative learning strategy, so that the multi-modal latent space is oriented toward personalized ranking during training. Experiments on real-world datasets validate the effectiveness of our method; the results demonstrate that it outperforms the state-of-the-art method by a large margin.
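The abstract describes the model only at a high level. Below is a minimal sketch of the matching-and-ranking idea it outlines: project pre-trained visual and textual product representations into a shared multi-modal latent space, map the user and query into the same space, and train with a pairwise comparison between a purchased product and a sampled negative. The layer sizes, the linear fusion, the additive user-query composition, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TranSearchSketch(nn.Module):
    """Hedged sketch of the idea in the abstract (not the authors' code):
    fuse pre-trained visual and textual item embeddings into one multi-modal
    latent space and match users/queries against products in that space."""

    def __init__(self, n_users, text_dim, vis_dim, latent_dim=128):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, latent_dim)   # user preference vector
        self.text_proj = nn.Linear(text_dim, latent_dim)    # from pre-trained textual space
        self.vis_proj = nn.Linear(vis_dim, latent_dim)      # from pre-trained visual space
        self.query_proj = nn.Linear(text_dim, latent_dim)   # query representation -> latent space
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)   # multi-modal fusion (assumed linear)

    def item_vec(self, item_text, item_vis):
        # fuse the two modalities into one multi-modal product representation
        t = torch.tanh(self.text_proj(item_text))
        v = torch.tanh(self.vis_proj(item_vis))
        return torch.tanh(self.fuse(torch.cat([t, v], dim=-1)))

    def score(self, user_ids, query_text, item_text, item_vis):
        # compose the user with the query, then match directly against the product
        uq = self.user_emb(user_ids) + torch.tanh(self.query_proj(query_text))
        return (uq * self.item_vec(item_text, item_vis)).sum(dim=-1)

def pairwise_loss(model, user_ids, query_text, pos_t, pos_v, neg_t, neg_v):
    # comparative (pairwise) learning: the purchased product should score higher
    # than a sampled negative product for the same (user, query) pair
    pos = model.score(user_ids, query_text, pos_t, pos_v)
    neg = model.score(user_ids, query_text, neg_t, neg_v)
    return -F.logsigmoid(pos - neg).mean()
```

In this reading, `item_text` and `item_vis` would come from the pre-trained textual and visual latent spaces that the abstract says are built from the also_view and buy_after_viewing relations, so the fusion layer only has to align already-meaningful modality-specific representations.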