Adversarially-learned Image Transfer Model for Multi-content Disentanglement
H. Seo, Jee-Hyong Lee
Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2020-10-13
DOI: 10.1145/3400286.3418250
Abstract
This paper addresses the multi-content disentanglement problem in unsupervised image transfer models. Image transfer based on generative models such as the VAE [1] or GAN [2] can be defined as mapping data from a source domain to a target domain. Existing disentanglement methods have focused on separating the elements of a latent vector to distinguish content and style information in an image. However, because they extract information from all pixels, it is hard to perform image transfer while controlling specific contents. To solve this problem, image transfer that can control the disentanglement of a specific content has recently been proposed. In this paper, by adapting the disentanglement concept to control multiple specific contents in an image, we propose an architecture suited to image transfer tasks such as adding or subtracting multiple contents. In addition, we propose an adversarially-learned auxiliary discriminator to further improve the quality of images synthesized by the multi-content disentanglement method. With the proposed method, we generate images while controlling two contents from the CelebA dataset, and show that the auxiliary discriminator lets us attach a specific content more clearly.
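The core idea of controlling contents through a disentangled latent space can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' architecture: it assumes the encoder's latent vector is split into a style slice plus one fixed-size slice per controllable content (the dimensions, slot layout, and `set_content` helper are all illustrative), so that "adding" or "subtracting" a content reduces to swapping that slice before decoding.

```python
import numpy as np

# Assumed (illustrative) split of the latent vector: a style part followed
# by one fixed-size slot per controllable content (e.g. glasses, smile).
STYLE_DIM = 8
CONTENT_DIM = 4
N_CONTENTS = 2

def split(z):
    """Split a latent vector into (style, [content_0, content_1, ...])."""
    style = z[:STYLE_DIM]
    contents = [z[STYLE_DIM + i * CONTENT_DIM : STYLE_DIM + (i + 1) * CONTENT_DIM]
                for i in range(N_CONTENTS)]
    return style, contents

def set_content(z, idx, code):
    """Return a copy of z with content slot `idx` replaced by `code`."""
    out = z.copy()
    start = STYLE_DIM + idx * CONTENT_DIM
    out[start:start + CONTENT_DIM] = code
    return out

rng = np.random.default_rng(0)
z = rng.normal(size=STYLE_DIM + N_CONTENTS * CONTENT_DIM)

# Stand-in code for "add glasses"; a trained model would supply this vector.
glasses_code = np.ones(CONTENT_DIM)
z_edit = set_content(z, 0, glasses_code)

style_a, _ = split(z)
style_b, contents_b = split(z_edit)
assert np.allclose(style_a, style_b)             # style slice is untouched
assert np.allclose(contents_b[0], glasses_code)  # content slot 0 swapped in
```

Because each content occupies its own slice, edits compose: calling `set_content` for both slots controls two contents at once, which is the multi-content setting the paper evaluates on CelebA.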