CapGAN: Text-to-Image Synthesis Using Capsule GANs

Information (Switzerland) · IF 2.4 · Q3 (Computer Science, Information Systems) · Pub Date: 2023-10-09 · DOI: 10.3390/info14100552
Maryam Omar, Hafeez Ur Rehman, Omar Bin Samin, Moutaz Alazab, Gianfranco Politano, Alfredo Benso
{"title":"使用胶囊gan进行文本到图像的合成","authors":"Maryam Omar, Hafeez Ur Rehman, Omar Bin Samin, Moutaz Alazab, Gianfranco Politano, Alfredo Benso","doi":"10.3390/info14100552","DOIUrl":null,"url":null,"abstract":"Text-to-image synthesis is one of the most critical and challenging problems of generative modeling. It is of substantial importance in the area of automatic learning, especially for image creation, modification, analysis and optimization. A number of works have been proposed in the past to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model called CapGAN, to synthesize images from a given single text statement to resolve the problem of global coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into vector representation. This encoded vector is used as an input for image synthesis using an adversarial process, in which two models are trained simultaneously, namely: generator (G) and discriminator (D). The model G generates fake images, while the model D tries to predict what the sample is from training data rather than generated by G. The conceptual novelty of this work lies in the integrating capsules at the discriminator level to make the model understand the orientational and relative spatial relationship between different entities of an object in an image. The inception score (IS) along with the Fréchet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. IS recorded for images generated using CapGAN is 4.05 ± 0.050, which is around 34% higher than images synthesized using traditional GANs, whereas the FID score calculated for synthesized images using CapGAN is 44.38, which is ab almost 9% improvement from the previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient in generating images with complex scenes.","PeriodicalId":38479,"journal":{"name":"Information (Switzerland)","volume":"15 1","pages":"0"},"PeriodicalIF":2.4000,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CapGAN: Text-to-Image Synthesis Using Capsule GANs\",\"authors\":\"Maryam Omar, Hafeez Ur Rehman, Omar Bin Samin, Moutaz Alazab, Gianfranco Politano, Alfredo Benso\",\"doi\":\"10.3390/info14100552\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Text-to-image synthesis is one of the most critical and challenging problems of generative modeling. It is of substantial importance in the area of automatic learning, especially for image creation, modification, analysis and optimization. A number of works have been proposed in the past to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model called CapGAN, to synthesize images from a given single text statement to resolve the problem of global coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into vector representation. This encoded vector is used as an input for image synthesis using an adversarial process, in which two models are trained simultaneously, namely: generator (G) and discriminator (D). 
The model G generates fake images, while the model D tries to predict what the sample is from training data rather than generated by G. The conceptual novelty of this work lies in the integrating capsules at the discriminator level to make the model understand the orientational and relative spatial relationship between different entities of an object in an image. The inception score (IS) along with the Fréchet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. IS recorded for images generated using CapGAN is 4.05 ± 0.050, which is around 34% higher than images synthesized using traditional GANs, whereas the FID score calculated for synthesized images using CapGAN is 44.38, which is ab almost 9% improvement from the previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient in generating images with complex scenes.\",\"PeriodicalId\":38479,\"journal\":{\"name\":\"Information (Switzerland)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2023-10-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information (Switzerland)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3390/info14100552\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information (Switzerland)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/info14100552","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Cited by: 0

Abstract

Text-to-image synthesis is one of the most critical and challenging problems of generative modeling. It is of substantial importance in the area of automatic learning, especially for image creation, modification, analysis and optimization. A number of works have been proposed in the past to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model called CapGAN to synthesize images from a given single text statement and thereby resolve the problem of global coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into a vector representation. This encoded vector is used as an input for image synthesis through an adversarial process, in which two models are trained simultaneously: a generator (G) and a discriminator (D). The model G generates fake images, while the model D tries to predict whether a sample comes from the training data or was generated by G. The conceptual novelty of this work lies in integrating capsules at the discriminator level, making the model understand the orientational and relative spatial relationships between the different entities of an object in an image. The inception score (IS) and the Fréchet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. The IS recorded for images generated using CapGAN is 4.05 ± 0.050, around 34% higher than for images synthesized using traditional GANs, whereas the FID score calculated for images synthesized using CapGAN is 44.38, an almost 9% improvement over previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient at generating images with complex scenes.
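As a concrete illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a text-conditional generator paired with a capsule-based discriminator. The layer sizes, the module names (`Generator`, `CapsuleDiscriminator`) and the 4800-dimensional skip-thought embedding are illustrative assumptions rather than the paper's exact configuration; the `squash` nonlinearity follows the standard capsule formulation of Sabour et al. (2017).

```python
# Hypothetical sketch of the CapGAN-style setup; sizes and names are assumed.
import torch
import torch.nn as nn

TEXT_DIM, NOISE_DIM, IMG_CH = 4800, 100, 3  # 4800-d skip-thought embedding (assumed)

def squash(s, dim=-1, eps=1e-8):
    """Capsule squashing: short vectors shrink toward 0, long vectors
    toward unit length, while the vector's orientation is preserved."""
    sq = (s ** 2).sum(dim=dim, keepdim=True)           # squared norm ||s||^2
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)

class Generator(nn.Module):
    """Maps (text embedding, noise) to a 64x64 RGB image."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(TEXT_DIM + NOISE_DIM, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),     # 16 -> 32
            nn.ConvTranspose2d(32, IMG_CH, 4, 2, 1), nn.Tanh()  # 32 -> 64
        )
    def forward(self, text_emb, z):
        h = self.fc(torch.cat([text_emb, z], dim=1)).view(-1, 128, 8, 8)
        return self.net(h)

class CapsuleDiscriminator(nn.Module):
    """Discriminator whose final features are grouped into capsules and
    squashed, so real/fake is judged from capsule vector lengths."""
    def __init__(self, n_caps=16, cap_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(IMG_CH, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),      # 32 -> 16
            nn.Conv2d(64, n_caps * cap_dim, 4, 2, 1),           # 16 -> 8
        )
        self.n_caps, self.cap_dim = n_caps, cap_dim
        self.text_proj = nn.Linear(TEXT_DIM, n_caps)            # inject the caption
        self.out = nn.Linear(n_caps * 2, 1)
    def forward(self, img, text_emb):
        feat = self.conv(img).mean(dim=(2, 3))                  # global average pool
        caps = squash(feat.view(-1, self.n_caps, self.cap_dim))
        lengths = caps.norm(dim=-1)                             # one activation per capsule
        cond = self.text_proj(text_emb)
        return self.out(torch.cat([lengths, cond], dim=1))     # real/fake logit
```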
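A single optimization step of the two-player game could then look as follows. The non-saturating binary cross-entropy losses are the standard GAN choice and an assumption here; the abstract only states that G and D are trained simultaneously.

```python
# One adversarial update for the hypothetical modules sketched above.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, real_imgs, text_emb, noise_dim=100):
    b = real_imgs.size(0)
    z = torch.randn(b, noise_dim, device=real_imgs.device)
    fake_imgs = G(text_emb, z)

    # Discriminator: real images with matching text -> 1, generated images -> 0.
    opt_d.zero_grad()
    d_real = D(real_imgs, text_emb)
    d_fake = D(fake_imgs.detach(), text_emb)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator: fool D into predicting "real" for generated images.
    opt_g.zero_grad()
    d_fake = D(fake_imgs, text_emb)
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```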
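The two reported metrics have standard definitions: IS is exp(E_x KL(p(y|x) || p(y))) over Inception-v3 class probabilities, and FID is the distance between Gaussian fits of real and generated Inception features. A sketch of both computations, assuming the Inception outputs have been extracted elsewhere:

```python
# Standard IS (Salimans et al. 2016) and FID (Heusel et al. 2017) computations.
import numpy as np
from scipy import linalg

def inception_score(probs, splits=10, eps=1e-12):
    """probs: (N, num_classes) softmax outputs of Inception-v3.
    IS = exp( E_x KL( p(y|x) || p(y) ) ), averaged over splits."""
    scores = []
    for chunk in np.array_split(probs, splits):
        p_y = chunk.mean(axis=0, keepdims=True)  # marginal class distribution p(y)
        kl = (chunk * (np.log(chunk + eps) - np.log(p_y + eps))).sum(axis=1)
        scores.append(np.exp(kl.mean()))
    return float(np.mean(scores)), float(np.std(scores))

def fid(feats_real, feats_fake):
    """feats_*: (N, d) Inception pool3 features.
    FID = ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^(1/2))."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    s_r = np.cov(feats_real, rowvar=False)
    s_f = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(s_r @ s_f)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(s_r + s_f - 2.0 * covmean))
```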
Source journal: Information (Switzerland) — Computer Science, Information Systems
CiteScore: 6.90
Self-citation rate: 0.00%
Articles per year: 515
Review time: 11 weeks
Latest articles in this journal:
- Weakly Supervised Learning Approach for Implicit Aspect Extraction
- Science Mapping of Meta-Analysis in Agricultural Science
- An Integrated Time Series Prediction Model Based on Empirical Mode Decomposition and Two Attention Mechanisms
- Context-Aware Personalization: A Systems Engineering Framework
- Polarizing Topics on Twitter in the 2022 United States Elections