{"title":"EZIGen:通过精确的主体编码和解耦引导,增强零镜头主体驱动图像生成功能","authors":"Zicheng Duan, Yuxuan Ding, Chenhui Gou, Ziqin Zhou, Ethan Smith, Lingqiao Liu","doi":"arxiv-2409.08091","DOIUrl":null,"url":null,"abstract":"Zero-shot subject-driven image generation aims to produce images that\nincorporate a subject from a given example image. The challenge lies in\npreserving the subject's identity while aligning with the text prompt, which\noften requires modifying certain aspects of the subject's appearance. Despite\nadvancements in diffusion model based methods, existing approaches still\nstruggle to balance identity preservation with text prompt alignment. In this\nstudy, we conducted an in-depth investigation into this issue and uncovered key\ninsights for achieving effective identity preservation while maintaining a\nstrong balance. Our key findings include: (1) the design of the subject image\nencoder significantly impacts identity preservation quality, and (2) generating\nan initial layout is crucial for both text alignment and identity preservation.\nBuilding on these insights, we introduce a new approach called EZIGen, which\nemploys two main strategies: a carefully crafted subject image Encoder based on\nthe UNet architecture of the pretrained Stable Diffusion model to ensure\nhigh-quality identity transfer, following a process that decouples the guidance\nstages and iteratively refines the initial image layout. Through these\nstrategies, EZIGen achieves state-of-the-art results on multiple subject-driven\nbenchmarks with a unified model and 100 times less training data.","PeriodicalId":501130,"journal":{"name":"arXiv - CS - Computer Vision and Pattern Recognition","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"EZIGen: Enhancing zero-shot subject-driven image generation with precise subject encoding and decoupled guidance\",\"authors\":\"Zicheng Duan, Yuxuan Ding, Chenhui Gou, Ziqin Zhou, Ethan Smith, Lingqiao Liu\",\"doi\":\"arxiv-2409.08091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Zero-shot subject-driven image generation aims to produce images that\\nincorporate a subject from a given example image. The challenge lies in\\npreserving the subject's identity while aligning with the text prompt, which\\noften requires modifying certain aspects of the subject's appearance. Despite\\nadvancements in diffusion model based methods, existing approaches still\\nstruggle to balance identity preservation with text prompt alignment. In this\\nstudy, we conducted an in-depth investigation into this issue and uncovered key\\ninsights for achieving effective identity preservation while maintaining a\\nstrong balance. Our key findings include: (1) the design of the subject image\\nencoder significantly impacts identity preservation quality, and (2) generating\\nan initial layout is crucial for both text alignment and identity preservation.\\nBuilding on these insights, we introduce a new approach called EZIGen, which\\nemploys two main strategies: a carefully crafted subject image Encoder based on\\nthe UNet architecture of the pretrained Stable Diffusion model to ensure\\nhigh-quality identity transfer, following a process that decouples the guidance\\nstages and iteratively refines the initial image layout. 
Through these\\nstrategies, EZIGen achieves state-of-the-art results on multiple subject-driven\\nbenchmarks with a unified model and 100 times less training data.\",\"PeriodicalId\":501130,\"journal\":{\"name\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Vision and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.08091\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08091","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
EZIGen: Enhancing zero-shot subject-driven image generation with precise subject encoding and decoupled guidance
Zero-shot subject-driven image generation aims to produce images that
incorporate a subject from a given example image. The challenge lies in
preserving the subject's identity while aligning with the text prompt, which
often requires modifying certain aspects of the subject's appearance. Despite
advancements in diffusion-model-based methods, existing approaches still
struggle to balance identity preservation with text prompt alignment. In this
study, we conducted an in-depth investigation into this issue and uncovered key
insights for achieving effective identity preservation while keeping it in
balance with text-prompt alignment. Our key findings include: (1) the design of the subject image
encoder significantly impacts identity preservation quality, and (2) generating
an initial layout is crucial for both text alignment and identity preservation.
Building on these insights, we introduce a new approach called EZIGen, which
employs two main strategies: a carefully crafted subject image Encoder based on
the UNet architecture of the pretrained Stable Diffusion model to ensure
high-quality identity transfer, and a process that decouples the guidance
stages and iteratively refines the initial image layout. Through these
strategies, EZIGen achieves state-of-the-art results on multiple subject-driven
benchmarks with a unified model and 100 times less training data.
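
As a rough illustration of the decoupled-guidance idea described in the abstract, the minimal PyTorch-style sketch below (not the authors' code; the toy modules, step counts, and additive conditioning rule are all illustrative assumptions) denoises with text-only conditioning for the first steps to form an initial layout, then mixes in subject features produced by a UNet-style encoder, and re-noises the result so the layout can be refined over several rounds.

# Hypothetical sketch of a decoupled, two-stage guidance loop (illustrative only).
import torch
import torch.nn as nn


class ToyDenoiser(nn.Module):
    """Stand-in for a pretrained diffusion UNet (toy, untrained)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim * 2, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, latent, cond):
        # Predict noise from the current latent and a conditioning vector.
        return self.net(torch.cat([latent, cond], dim=-1))


def encode_subject(subject_image: torch.Tensor, encoder: nn.Module) -> torch.Tensor:
    """Extract subject features by reusing the denoiser as an encoder
    (mimicking a UNet-based subject encoder; zero conditioning here)."""
    return encoder(subject_image, torch.zeros_like(subject_image))


@torch.no_grad()
def decoupled_guidance(denoiser, text_emb, subject_feat, steps=50,
                       layout_steps=20, refine_rounds=2, dim=64):
    latent = torch.randn(1, dim)
    for _ in range(refine_rounds):
        for t in range(steps):
            if t < layout_steps:
                cond = text_emb                              # Stage 1: text-only layout guidance
            else:
                cond = 0.5 * text_emb + 0.5 * subject_feat   # Stage 2: add subject identity
            noise_pred = denoiser(latent, cond)
            latent = latent - (1.0 / steps) * noise_pred     # toy Euler-style update
        # Re-noise the current result so the next round refines the layout.
        latent = latent + 0.3 * torch.randn_like(latent)
    return latent


if __name__ == "__main__":
    dim = 64
    denoiser = ToyDenoiser(dim)
    text_emb = torch.randn(1, dim)                 # stand-in for the prompt embedding
    subject_feat = encode_subject(torch.randn(1, dim), denoiser)
    print(decoupled_guidance(denoiser, text_emb, subject_feat, dim=dim).shape)  # torch.Size([1, 64])

In the paper's setting, the subject features would come from an encoder built on the pretrained Stable Diffusion UNet and the updates would run in its latent space; the toy denoiser and additive conditioning above only stand in for that machinery.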