A Decomposition Method of Object Transfiguration
Seung Joon Lee, Keon-Woo Kang, Suk-ju Kang, Siyeong Lee
SIGGRAPH Asia 2019 Technical Briefs · Published 2019-11-17 · DOI: 10.1145/3355088.3365151
Citations: 0
Abstract
Existing deep learning-based object transfiguration methods build on unsupervised image-to-image translation, which achieves reasonable performance. However, these methods often fail on tasks where the shape of an object changes significantly, and the shape and texture of the original object tend to persist in the converted image. To address these issues, we propose a novel method that decomposes the object transfiguration task into two subtasks: object removal and object synthesis. This prevents the original object from affecting the generated one and makes the generated object better suited to the background. We explicitly formulate each subtask, distinguishing the background from the object using instance information (e.g., object segmentation masks). Unlike other methods, our model is unconstrained by the position, shape, and size of the original object. Qualitative and quantitative comparisons with other methods demonstrate the effectiveness of the proposed approach.
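The decomposition described above can be illustrated with a minimal sketch. This is not the authors' implementation: the paper uses learned networks for both stages, while here `remove_object` and `synthesize_object` are hypothetical stand-ins (simple mask-based array operations with NumPy) that show only the data flow — remove the masked object first, then synthesize a new one into the cleaned background, so the original object cannot leak into the output.

```python
import numpy as np

def remove_object(image, mask, fill_value=0.0):
    """Stage 1 (object removal): erase the masked object region.
    A real system would use an inpainting network conditioned on the
    background; here we simply blank the region as a placeholder."""
    result = image.copy()
    result[mask] = fill_value
    return result

def synthesize_object(background, mask, texture):
    """Stage 2 (object synthesis): generate a new object inside the
    mask, conditioned on the cleaned background. The paper uses a
    generative network; here we paste a constant placeholder texture."""
    result = background.copy()
    result[mask] = texture
    return result

def transfigure(image, mask, texture):
    """Decomposed pipeline: removal first, then synthesis. Because the
    object region is cleared before synthesis, the original object's
    shape and texture cannot affect the generated object."""
    background = remove_object(image, mask)
    return synthesize_object(background, mask, texture)

# Toy example: a 4x4 grayscale image whose object (a segmentation
# mask over the top-left 2x2 block) is replaced by a new texture.
image = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
out = transfigure(image, mask, texture=0.5)
```

Note that the mask-guided formulation is what frees the pipeline from constraints on the original object's position, shape, and size: both stages operate only through `mask`, so any instance segmentation works unchanged.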