Incorporating Human Body Shape Guidance for Cloth Warping in Model to Person Virtual Try-on Problems

Debapriya Roy, Sanchayan Santra, B. Chanda
2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), published 2020-11-25
DOI: 10.1109/IVCNZ51579.2020.9290603
Citations: 2

Abstract

The world of retail has changed considerably over the last few decades, and with a market size of $2.4 trillion, the fashion industry is well ahead of others in this respect. With technologies such as virtual try-on (vton), online shoppers can now virtually try a product before buying. However, current image-based virtual try-on methods still have a long way to go in producing realistic outputs. In general, vton methods work in two stages: the first stage warps the source cloth, and the second stage merges the warped cloth with the person image to predict the final try-on output. While the second stage is comparatively easy to handle with neural networks, predicting an accurate warp is difficult, as replicating actual human body deformation is challenging. A fundamental issue in the vton domain is data. Although many images of clothing are available on the internet, on social media and e-commerce websites, most of them show a person wearing the garment, whereas existing approaches require a separate, standalone cloth image as the input source clothing. To address these problems, we propose a model-to-person cloth warping strategy, where the objective is to align the cloth segmented from the model image so that it fits the target person, thus eliminating the need for separate cloth images. Compared to existing warping approaches, our method shows improvement, especially for cloths with complex patterns. Rigorous experiments on various public-domain datasets establish the efficacy of this method compared to benchmark methods.
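The two-stage pipeline described above (warp the source cloth, then merge it with the person image) can be illustrated with a toy geometric stand-in. In this sketch the learned warp is replaced by a simple centroid-and-scale alignment of the segmented cloth mask to the target body region, and the try-on stage is a naive composite; all function names are illustrative, not the paper's actual modules, which use learned deformations and a neural fusion network.

```python
import numpy as np

def mask_stats(mask):
    """Centroid and RMS spread of a binary mask's foreground pixels."""
    ys, xs = np.nonzero(mask)
    c = np.array([ys.mean(), xs.mean()])
    s = np.sqrt(((ys - c[0]) ** 2 + (xs - c[1]) ** 2).mean())
    return c, s

def warp_cloth(cloth_mask, body_mask):
    """Stage 1 (toy): scale and translate the segmented cloth mask so its
    centroid and spread match the target body region. The actual method
    predicts a far richer deformation; this is only a geometric stand-in."""
    c_src, s_src = mask_stats(cloth_mask)
    c_dst, s_dst = mask_stats(body_mask)
    scale = s_dst / s_src
    warped = np.zeros_like(body_mask)
    ys, xs = np.nonzero(cloth_mask)
    ny = np.clip(np.round((ys - c_src[0]) * scale + c_dst[0]),
                 0, warped.shape[0] - 1).astype(int)
    nx = np.clip(np.round((xs - c_src[1]) * scale + c_dst[1]),
                 0, warped.shape[1] - 1).astype(int)
    warped[ny, nx] = 1
    return warped

def try_on(person_img, warped_mask, cloth_color):
    """Stage 2 (toy): composite the warped cloth onto the person image.
    The paper instead merges the two with a neural network."""
    out = person_img.copy()
    out[warped_mask.astype(bool)] = cloth_color
    return out
```

Running `warp_cloth` on a small cloth mask and a larger, displaced body mask moves and rescales the cloth region onto the body, after which `try_on` paints it onto the person image; the point is only to make the two-stage structure concrete, not to approximate the learned warp.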