MagicStyle: Portrait Stylization Based on Reference Image

Zhaoli Deng, Kaibin Zhou, Fanyi Wang, Zhenpeng Mi
{"title":"MagicStyle: Portrait Stylization Based on Reference Image","authors":"Zhaoli Deng, Kaibin Zhou, Fanyi Wang, Zhenpeng Mi","doi":"arxiv-2409.08156","DOIUrl":null,"url":null,"abstract":"The development of diffusion models has significantly advanced the research\non image stylization, particularly in the area of stylizing a content image\nbased on a given style image, which has attracted many scholars. The main\nchallenge in this reference image stylization task lies in how to maintain the\ndetails of the content image while incorporating the color and texture features\nof the style image. This challenge becomes even more pronounced when the\ncontent image is a portrait which has complex textural details. To address this\nchallenge, we propose a diffusion model-based reference image stylization\nmethod specifically for portraits, called MagicStyle. MagicStyle consists of\ntwo phases: Content and Style DDIM Inversion (CSDI) and Feature Fusion Forward\n(FFF). The CSDI phase involves a reverse denoising process, where DDIM\nInversion is performed separately on the content image and the style image,\nstoring the self-attention query, key and value features of both images during\nthe inversion process. The FFF phase executes forward denoising, harmoniously\nintegrating the texture and color information from the pre-stored feature\nqueries, keys and values into the diffusion generation process based on our\nWell-designed Feature Fusion Attention (FFA). We conducted comprehensive\ncomparative and ablation experiments to validate the effectiveness of our\nproposed MagicStyle and FFA.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08156","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The development of diffusion models has significantly advanced the research on image stylization, particularly in the area of stylizing a content image based on a given style image, which has attracted many scholars. The main challenge in this reference image stylization task lies in how to maintain the details of the content image while incorporating the color and texture features of the style image. This challenge becomes even more pronounced when the content image is a portrait, which has complex textural details. To address this challenge, we propose a diffusion model-based reference image stylization method specifically for portraits, called MagicStyle. MagicStyle consists of two phases: Content and Style DDIM Inversion (CSDI) and Feature Fusion Forward (FFF). The CSDI phase involves a reverse denoising process, where DDIM Inversion is performed separately on the content image and the style image, storing the self-attention query, key and value features of both images during the inversion process. The FFF phase executes forward denoising, harmoniously integrating the texture and color information from the pre-stored feature queries, keys and values into the diffusion generation process based on our well-designed Feature Fusion Attention (FFA). We conducted comprehensive comparative and ablation experiments to validate the effectiveness of our proposed MagicStyle and FFA.
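As a rough illustration of the attention-side fusion the abstract describes, the sketch below shows one common way self-attention features stored during DDIM Inversion can be injected into forward denoising: the queries of the current generation step attend over keys and values concatenated from the stored content and style features. This is a minimal sketch under assumed shapes and naming, not the paper's exact FFA formulation; the function and variable names (feature_fusion_attention, q_gen, k_content, and so on) are illustrative.

```python
import torch


def feature_fusion_attention(q_gen, k_content, v_content, k_style, v_style, scale=None):
    """Illustrative attention-based feature fusion (not the paper's exact FFA).

    q_gen:               queries from the current forward-denoising step, shape (B, N, C)
    k_content/v_content: self-attention features stored during the content DDIM Inversion
    k_style/v_style:     self-attention features stored during the style DDIM Inversion
    """
    if scale is None:
        scale = q_gen.shape[-1] ** -0.5
    # Concatenate content and style keys/values so the generated queries can
    # attend jointly to structural detail (content) and color/texture (style).
    k = torch.cat([k_content, k_style], dim=1)
    v = torch.cat([v_content, v_style], dim=1)
    attn = torch.softmax(q_gen @ k.transpose(-2, -1) * scale, dim=-1)
    return attn @ v


# Toy usage with random features (batch=1, tokens=64, channels=320).
B, N, C = 1, 64, 320
fused = feature_fusion_attention(
    torch.randn(B, N, C),
    torch.randn(B, N, C), torch.randn(B, N, C),
    torch.randn(B, N, C), torch.randn(B, N, C),
)
print(fused.shape)  # torch.Size([1, 64, 320])
```

In practice one would hook such a fusion into the U-Net's self-attention layers during the FFF forward pass, possibly reweighting or masking the style branch to control how strongly color and texture override the portrait's fine detail; how MagicStyle balances the two branches is specified in the paper itself.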