A Decomposition Method of Object Transfiguration

Seung Joon Lee, Keon-Woo Kang, Suk-ju Kang, Siyeong Lee
{"title":"一种物体变形的分解方法","authors":"Seung Joon Lee, Keon-Woo Kang, Suk-ju Kang, Siyeong Lee","doi":"10.1145/3355088.3365151","DOIUrl":null,"url":null,"abstract":"Existing deep learning-based object transfiguration methods are based on unsupervised image-to-image translation which shows reasonable performance. However, previous methods often fail in tasks where the shape of an object changes significantly. In addition, the shape and texture of an original object remain in the converted image. To address these issues, we propose a novel method that decomposes an object transfiguration task into two subtasks: object removal and object synthesis. This prevents an original object from affecting a generated object and makes the generated object better suited to the background. Then, we explicitly formulate each task distinguishing a background and an object using instance information (e.g. object segmentation masks). Our model is unconstrained by position, shape, and size of an original object compared to other methods. We show qualitative and quantitative comparisons with other methods demonstrating the effectiveness of the proposed method.","PeriodicalId":435930,"journal":{"name":"SIGGRAPH Asia 2019 Technical Briefs","volume":"97 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Decomposition Method of Object Transfiguration\",\"authors\":\"Seung Joon Lee, Keon-Woo Kang, Suk-ju Kang, Siyeong Lee\",\"doi\":\"10.1145/3355088.3365151\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Existing deep learning-based object transfiguration methods are based on unsupervised image-to-image translation which shows reasonable performance. However, previous methods often fail in tasks where the shape of an object changes significantly. In addition, the shape and texture of an original object remain in the converted image. To address these issues, we propose a novel method that decomposes an object transfiguration task into two subtasks: object removal and object synthesis. This prevents an original object from affecting a generated object and makes the generated object better suited to the background. Then, we explicitly formulate each task distinguishing a background and an object using instance information (e.g. object segmentation masks). Our model is unconstrained by position, shape, and size of an original object compared to other methods. 
We show qualitative and quantitative comparisons with other methods demonstrating the effectiveness of the proposed method.\",\"PeriodicalId\":435930,\"journal\":{\"name\":\"SIGGRAPH Asia 2019 Technical Briefs\",\"volume\":\"97 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIGGRAPH Asia 2019 Technical Briefs\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3355088.3365151\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2019 Technical Briefs","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3355088.3365151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Existing deep learning-based object transfiguration methods rely on unsupervised image-to-image translation, which achieves reasonable performance. However, previous methods often fail in tasks where the shape of an object changes significantly. In addition, the shape and texture of the original object remain in the converted image. To address these issues, we propose a novel method that decomposes an object transfiguration task into two subtasks: object removal and object synthesis. This prevents the original object from affecting the generated object and makes the generated object better suited to the background. We then explicitly formulate each subtask, distinguishing the background from the object using instance information (e.g., object segmentation masks). Compared to other methods, our model is unconstrained by the position, shape, and size of the original object. We present qualitative and quantitative comparisons with other methods, demonstrating the effectiveness of the proposed method.
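The abstract only outlines the two-stage decomposition, so the following is a minimal, hypothetical sketch of how such a pipeline could be wired together: a removal stage that inpaints the region under the source-object mask, followed by a synthesis stage that generates the target object conditioned on the cleaned background. The module names (RemovalNet, SynthesisNet, transfigure), the toy architectures, and the compositing step are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of the two-stage decomposition described in the abstract.
# Architectures, losses, and module names are placeholders; the paper's actual
# networks are not specified on this page.
import torch
import torch.nn as nn


class RemovalNet(nn.Module):
    """Stage 1 (assumed): inpaint the region covered by the source-object mask,
    producing an object-free background."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # RGB + mask channel
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, mask):
        # Erase the object pixels and pass the mask so the network knows where to fill.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        return self.net(x)


class SynthesisNet(nn.Module):
    """Stage 2 (assumed): generate the target-class object inside the mask,
    conditioned on the clean background."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, background, mask):
        x = torch.cat([background, mask], dim=1)
        obj = self.net(x)
        # Composite: generated object inside the mask, untouched background outside.
        return mask * obj + (1 - mask) * background


def transfigure(image, mask, removal, synthesis):
    """Full pipeline: remove the source object, then synthesize the target object."""
    background = removal(image, mask)
    return synthesis(background, mask)


if __name__ == "__main__":
    img = torch.rand(1, 3, 128, 128)                      # input image
    msk = (torch.rand(1, 1, 128, 128) > 0.5).float()      # instance segmentation mask
    out = transfigure(img, msk, RemovalNet(), SynthesisNet())
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

Because the original object is fully removed before synthesis, the generated object is conditioned only on the background, which is the property the decomposition is meant to guarantee.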