Dual conditional GAN based on external attention for semantic image synthesis

IF 3.2 · CAS Region 4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence) · Connection Science · Pub Date: 2023-10-04 · DOI: 10.1080/09540091.2023.2259120
Gang Liu, Qijun Zhou, Xiaoxiao Xie, Qingchen Yu
{"title":"基于外部注意的双条件GAN语义图像合成","authors":"Gang Liu, Qijun Zhou, Xiaoxiao Xie, Qingchen Yu","doi":"10.1080/09540091.2023.2259120","DOIUrl":null,"url":null,"abstract":"Although the existing semantic image synthesis methods based on generative adversarial networks (GANs) have achieved great success, the quality of the generated images still cannot achieve satisfactory results. This is mainly caused by two reasons. One reason is that the information in the semantic layout is sparse. Another reason is that a single constraint cannot effectively control the position relationship between objects in the generated image. To address the above problems, we propose a dual-conditional GAN with based on an external attention for semantic image synthesis (DCSIS). In DCSIS, the adaptive normalization method uses the one-hot encoded semantic layout to generate the first latent space and the external attention uses the RGB encoded semantic layout to generate the second latent space. Two latent spaces control the shape of objects and the positional relationship between objects in the generated image. The graph attention (GAT) is added to the generator to strengthen the relationship between different categories in the generated image. A graph convolutional segmentation network (GSeg) is designed to learn information for each category. Experiments on several challenging datasets demonstrate the advantages of our method over existing approaches, regarding both visual quality and the representative evaluating criteria.","PeriodicalId":50629,"journal":{"name":"Connection Science","volume":"58 1","pages":"0"},"PeriodicalIF":3.2000,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Dual conditional GAN based on external attention for semantic image synthesis\",\"authors\":\"Gang Liu, Qijun Zhou, Xiaoxiao Xie, Qingchen Yu\",\"doi\":\"10.1080/09540091.2023.2259120\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Although the existing semantic image synthesis methods based on generative adversarial networks (GANs) have achieved great success, the quality of the generated images still cannot achieve satisfactory results. This is mainly caused by two reasons. One reason is that the information in the semantic layout is sparse. Another reason is that a single constraint cannot effectively control the position relationship between objects in the generated image. To address the above problems, we propose a dual-conditional GAN with based on an external attention for semantic image synthesis (DCSIS). In DCSIS, the adaptive normalization method uses the one-hot encoded semantic layout to generate the first latent space and the external attention uses the RGB encoded semantic layout to generate the second latent space. Two latent spaces control the shape of objects and the positional relationship between objects in the generated image. The graph attention (GAT) is added to the generator to strengthen the relationship between different categories in the generated image. A graph convolutional segmentation network (GSeg) is designed to learn information for each category. 
Experiments on several challenging datasets demonstrate the advantages of our method over existing approaches, regarding both visual quality and the representative evaluating criteria.\",\"PeriodicalId\":50629,\"journal\":{\"name\":\"Connection Science\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2023-10-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Connection Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/09540091.2023.2259120\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Connection Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/09540091.2023.2259120","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Although existing semantic image synthesis methods based on generative adversarial networks (GANs) have achieved great success, the quality of the generated images is still not satisfactory. There are two main reasons for this. First, the information in the semantic layout is sparse. Second, a single constraint cannot effectively control the positional relationships between objects in the generated image. To address these problems, we propose a dual conditional GAN based on external attention for semantic image synthesis (DCSIS). In DCSIS, an adaptive normalization method uses the one-hot-encoded semantic layout to generate the first latent space, and external attention uses the RGB-encoded semantic layout to generate the second latent space. The two latent spaces control the shapes of objects and the positional relationships between them in the generated image. A graph attention (GAT) module is added to the generator to strengthen the relationships between the different categories in the generated image, and a graph convolutional segmentation network (GSeg) is designed to learn information for each category. Experiments on several challenging datasets demonstrate the advantages of our method over existing approaches in terms of both visual quality and representative evaluation criteria.
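The abstract names two conditioning branches that are well-documented building blocks in the literature. Below is a minimal PyTorch sketch of both: an external-attention layer following Guo et al.'s formulation (the input attends to two small learnable memory units, with double normalization), and a SPADE-style spatially adaptive normalization block conditioned on the one-hot semantic layout. The class names, tensor shapes, hidden sizes, and the choice of BatchNorm are illustrative assumptions; the exact DCSIS architecture may differ from this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExternalAttention(nn.Module):
    # External attention: the input attends to two small learnable memory
    # units (M_k, M_v) instead of to itself, so the cost is linear in the
    # number of tokens. The memory size "s" is an assumed hyperparameter.
    def __init__(self, d_model: int, s: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, s, bias=False)  # memory key unit M_k
        self.mv = nn.Linear(s, d_model, bias=False)  # memory value unit M_v

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d_model), e.g. a flattened feature map of the
        # RGB-encoded semantic layout.
        attn = self.mk(x)                      # (B, N, s)
        attn = F.softmax(attn, dim=1)          # normalise over the N tokens
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double norm
        return self.mv(attn)                   # (B, N, d_model)


class AdaptiveLayoutNorm(nn.Module):
    # SPADE-style adaptive normalisation (a hypothetical stand-in for the
    # paper's method): the one-hot layout predicts a per-pixel scale and
    # shift that modulate parameter-free normalised activations.
    def __init__(self, channels: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(n_classes, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, layout: torch.Tensor) -> torch.Tensor:
        # layout: (B, n_classes, H, W) one-hot semantic map, resized to
        # the spatial size of the feature map being normalised.
        seg = F.interpolate(layout, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```

In a dual-conditioning setup of this kind, the normalization branch shapes each object from the one-hot layout, while the external-attention branch, fed the RGB-encoded layout, captures layout-wide context that helps place objects relative to one another.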
Source Journal
Connection Science (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 6.50
Self-citation rate: 39.60%
Articles per year: 94
Review time: 3 months
Journal description: Connection Science is an interdisciplinary journal dedicated to exploring the convergence of the analytic and synthetic sciences, including neuroscience, computational modelling, artificial intelligence, machine learning, deep learning, databases, big data, quantum computing, blockchain, zero-knowledge proofs, the Internet of Things, cybersecurity, and parallel and distributed computing. A strong focus is on articles arising from connectionist, probabilistic, dynamical, or evolutionary approaches to computer science, practical applications, and systems-level computational subjects that seek to understand models in science and engineering.
Latest articles in this journal:
- Devising single in-out long short-term memory univariate models for predicting the electricity price on the day-ahead markets
- A continual learning framework to train robust image recognition models by adversarial training and knowledge distillation
- IPFS-blockchain-based delegation model for internet of medical robotics things telesurgery system
- Toward cost-effective quantum circuit simulation with performance tuning techniques
- ERAM-EE: Efficient resource allocation and management strategies with energy efficiency under fog–internet of things environments