WAD-CMSN: Wasserstein Distance based Cross-Modal Semantic Network for Zero-Shot Sketch-Based Image Retrieval

Guanglong Xu, Zhensheng Hu, Jia Cai
{"title":"WAD-CMSN: Wasserstein Distance based Cross-Modal Semantic Network for Zero-Shot Sketch-Based Image Retrieval","authors":"Guanglong Xu, Zhensheng Hu, Jia Cai","doi":"10.1142/s0219691322500540","DOIUrl":null,"url":null,"abstract":"Zero-shot sketch-based image retrieval (ZSSBIR), as a popular studied branch of computer vision, attracts wide attention recently. Unlike sketch-based image retrieval (SBIR), the main aim of ZSSBIR is to retrieve natural images given free hand-drawn sketches that may not appear during training. Previous approaches used semantic aligned sketch-image pairs or utilized memory expensive fusion layer for projecting the visual information to a low dimensional subspace, which ignores the significant heterogeneous cross-domain discrepancy between highly abstract sketch and relevant image. This may yield poor performance in the training phase. To tackle this issue and overcome this drawback, we propose a Wasserstein distance based cross-modal semantic network (WAD-CMSN) for ZSSBIR. Specifically, it first projects the visual information of each branch (sketch, image) to a common low dimensional semantic subspace via Wasserstein distance in an adversarial training manner. Furthermore, identity matching loss is employed to select useful features, which can not only capture complete semantic knowledge, but also alleviate the over-fitting phenomenon caused by the WAD-CMSN model. Experimental results on the challenging Sketchy (Extended) and TU-Berlin (Extended) datasets indicate the effectiveness of the proposed WAD-CMSN model over several competitors.","PeriodicalId":158567,"journal":{"name":"Int. J. Wavelets Multiresolution Inf. Process.","volume":"1103 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Int. J. Wavelets Multiresolution Inf. Process.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s0219691322500540","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Zero-shot sketch-based image retrieval (ZSSBIR), a widely studied branch of computer vision, has attracted broad attention recently. Unlike sketch-based image retrieval (SBIR), the main aim of ZSSBIR is to retrieve natural images given free-hand sketches that may not appear during training. Previous approaches used semantically aligned sketch-image pairs or relied on memory-expensive fusion layers to project the visual information into a low-dimensional subspace, which ignores the significant heterogeneous cross-domain discrepancy between a highly abstract sketch and the relevant image. This may yield poor performance in the training phase. To tackle this issue and overcome this drawback, we propose a Wasserstein distance based cross-modal semantic network (WAD-CMSN) for ZSSBIR. Specifically, it first projects the visual information of each branch (sketch, image) into a common low-dimensional semantic subspace via the Wasserstein distance in an adversarial training manner. Furthermore, an identity matching loss is employed to select useful features, which not only captures complete semantic knowledge but also alleviates over-fitting in the WAD-CMSN model. Experimental results on the challenging Sketchy (Extended) and TU-Berlin (Extended) datasets demonstrate the effectiveness of the proposed WAD-CMSN model over several competitors.
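To make the abstract's description concrete, the sketch below illustrates the general idea of aligning a sketch branch and an image branch in a shared low-dimensional semantic subspace with a WGAN-style critic (an estimate of the Wasserstein-1 distance) plus an identity matching loss. This is a minimal, hedged sketch only: all module names, dimensions, loss weights, and the interpretation of the identity loss as a class-prediction cross-entropy are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of Wasserstein-distance-based cross-modal alignment,
# loosely following the abstract of WAD-CMSN. All names, sizes, and
# loss weights are illustrative assumptions.
import torch
import torch.nn as nn

EMB_DIM, SEM_DIM, NUM_CLASSES = 512, 128, 100  # assumed feature/class sizes


class Projector(nn.Module):
    """Maps backbone features of one branch (sketch or image)
    into the shared low-dimensional semantic subspace."""
    def __init__(self, in_dim=EMB_DIM, out_dim=SEM_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class Critic(nn.Module):
    """WGAN-style critic; the gap between its mean outputs on image and
    sketch embeddings estimates the Wasserstein-1 distance (the critic
    itself would be trained with weight clipping or a gradient penalty
    to stay approximately 1-Lipschitz)."""
    def __init__(self, dim=SEM_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 128), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, z):
        return self.net(z)


sketch_proj, image_proj = Projector(), Projector()
critic = Critic()
# "Identity matching" treated here as class prediction on the shared
# embedding (an assumption for illustration).
identity_head = nn.Linear(SEM_DIM, NUM_CLASSES)
ce = nn.CrossEntropyLoss()


def training_losses(sketch_feat, image_feat, labels, lambda_id=1.0):
    """Projector-side objective for one batch: minimise the estimated
    Wasserstein distance between the two modalities (adversarial
    alignment) while keeping class-discriminative information."""
    zs, zi = sketch_proj(sketch_feat), image_proj(image_feat)
    # Critic's estimate of the cross-modal Wasserstein distance.
    wd_estimate = critic(zi).mean() - critic(zs).mean()
    align_loss = wd_estimate
    # Identity matching loss on both branches to retain semantic knowledge.
    id_loss = ce(identity_head(zs), labels) + ce(identity_head(zi), labels)
    return align_loss + lambda_id * id_loss, wd_estimate


if __name__ == "__main__":
    # Toy usage with random backbone features for a batch of 8 sketch-image pairs.
    feats_s = torch.randn(8, EMB_DIM)
    feats_i = torch.randn(8, EMB_DIM)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    total, wd = training_losses(feats_s, feats_i, labels)
    print(f"total loss {total.item():.3f}, Wasserstein estimate {wd.item():.3f}")
```

In a full training loop the critic and the projectors would be updated in alternation, as is standard for adversarial Wasserstein objectives; the snippet above only shows the projector-side losses.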