One-for-All: Towards Universal Domain Translation With a Single StyleGAN

Yong Du, Jiahui Zhan, Xinzhe Li, Junyu Dong, Sheng Chen, Ming-Hsuan Yang, Shengfeng He
{"title":"One-for-All: Towards Universal Domain Translation With a Single StyleGAN","authors":"Yong Du;Jiahui Zhan;Xinzhe Li;Junyu Dong;Sheng Chen;Ming-Hsuan Yang;Shengfeng He","doi":"10.1109/TPAMI.2025.3530099","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains under conditions of limited training data and significant visual differences. The main idea behind our approach is leveraging the domain-neutral capabilities of CLIP as a bridging mechanism, while utilizing a separate module to extract abstract, domain-agnostic semantics from the embeddings of both the source and target realms. Fusing these abstract semantics with target-specific semantics results in a transformed embedding within the CLIP space. To bridge the gap between the disparate worlds of CLIP and StyleGAN, we introduce a new non-linear mapper, the CLIP2P mapper. Utilizing CLIP embeddings, this module is tailored to approximate the latent distribution in the StyleGAN's latent space, effectively acting as a connector between these two spaces. The proposed UniTranslator is versatile and capable of performing various tasks, including style mixing, stylization, and translations, even in visually challenging scenarios across different visual domains. Notably, UniTranslator generates high-quality translations that showcase domain relevance, diversity, and improved image quality. UniTranslator surpasses the performance of existing general-purpose models and performs well against specialized models in representative tasks.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"47 4","pages":"2865-2881"},"PeriodicalIF":18.6000,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10848371/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this paper, we propose a novel translation model, UniTranslator, for transforming representations between visually distinct domains under conditions of limited training data and significant visual differences. The main idea behind our approach is to leverage the domain-neutral capabilities of CLIP as a bridging mechanism, while utilizing a separate module to extract abstract, domain-agnostic semantics from the embeddings of both the source and target realms. Fusing these abstract semantics with target-specific semantics results in a transformed embedding within the CLIP space. To bridge the gap between the disparate worlds of CLIP and StyleGAN, we introduce a new non-linear mapper, the CLIP2P mapper. Utilizing CLIP embeddings, this module is tailored to approximate the latent distribution in StyleGAN's latent space, effectively acting as a connector between the two spaces. The proposed UniTranslator is versatile and capable of performing various tasks, including style mixing, stylization, and translation, even in visually challenging scenarios across different domains. Notably, UniTranslator generates high-quality translations that showcase domain relevance, diversity, and improved image quality. UniTranslator surpasses existing general-purpose models and performs well against specialized models in representative tasks.
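The abstract outlines a three-stage pipeline: CLIP embeddings act as a domain-neutral bridge, a separate module distills abstract semantics that are fused with target-specific semantics, and the CLIP2P mapper carries the fused embedding into StyleGAN's latent space. The PyTorch sketch below illustrates that data flow only; every module architecture, dimension, and the linear fusion rule are hypothetical stand-ins chosen for illustration, not the authors' released implementation.

```python
# Minimal, illustrative sketch of the pipeline described in the abstract.
# All module designs, dimensions, and the fusion rule are ASSUMPTIONS;
# this is not the authors' code.
import torch
import torch.nn as nn

CLIP_DIM = 512    # CLIP ViT-B/32 image-embedding size
LATENT_DIM = 512  # assumed StyleGAN latent-code size


class SemanticsExtractor(nn.Module):
    """Hypothetical stand-in for the 'separate module' that distills
    abstract, domain-agnostic semantics from a CLIP embedding."""

    def __init__(self, dim: int = CLIP_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, clip_emb: torch.Tensor) -> torch.Tensor:
        return self.net(clip_emb)


class CLIP2PMapper(nn.Module):
    """Hypothetical non-linear mapper from CLIP space to a StyleGAN
    latent space, standing in for the paper's CLIP2P mapper."""

    def __init__(self, in_dim: int = CLIP_DIM, out_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.LeakyReLU(0.2),
            nn.Linear(out_dim, out_dim), nn.LeakyReLU(0.2),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, clip_emb: torch.Tensor) -> torch.Tensor:
        return self.net(clip_emb)


def translate(src_clip, tgt_clip, extractor, mapper, alpha: float = 0.5):
    """Fuse the source's domain-agnostic semantics with target-specific
    semantics, then map the fused CLIP embedding to a StyleGAN latent.
    The convex-combination fusion below is an assumed, simplified rule."""
    abstract_sem = extractor(src_clip)                      # domain-agnostic part
    fused = alpha * abstract_sem + (1 - alpha) * tgt_clip   # fused CLIP embedding
    return mapper(fused)                                    # latent for StyleGAN


if __name__ == "__main__":
    extractor, mapper = SemanticsExtractor(), CLIP2PMapper()
    src = torch.randn(1, CLIP_DIM)  # stand-in for a real CLIP image embedding
    tgt = torch.randn(1, CLIP_DIM)
    w = translate(src, tgt, extractor, mapper)
    print(w.shape)  # torch.Size([1, 512])
```

In the paper's setting, the resulting latent code would drive a pretrained StyleGAN generator to synthesize the translated image; here random tensors stand in for actual CLIP image embeddings.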