Scribble-Guided Diffusion for Training-free Text-to-Image Generation
Seonho Lee, Jiho Choi, Seohyun Lim, Jiwook Kim, Hyunjung Shim
arXiv:2409.08026 · Published 2024-09-12 · arXiv - CS - Computer Vision and Pattern Recognition
Abstract
Recent advancements in text-to-image diffusion models have demonstrated
remarkable success, yet they often struggle to fully capture the user's intent.
Existing approaches using textual inputs combined with bounding boxes or region
masks fall short in providing precise spatial guidance, often leading to
misaligned or unintended object orientation. To address these limitations, we
propose Scribble-Guided Diffusion (ScribbleDiff), a training-free approach that
utilizes simple user-provided scribbles as visual prompts to guide image
generation. However, incorporating scribbles into diffusion models presents
challenges due to their sparse and thin nature, making it difficult to ensure
accurate orientation alignment. To overcome these challenges, we introduce
moment alignment and scribble propagation, which allow for more effective and
flexible alignment between generated images and scribble inputs. Experimental
results on the PASCAL-Scribble dataset demonstrate significant improvements in
spatial control and consistency, showcasing the effectiveness of scribble-based
guidance in diffusion models. Our code is available at
https://github.com/kaist-cvml-lab/scribble-diffusion.
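The abstract names "moment alignment" but does not spell out how scribbles and attention are compared. Below is a minimal, illustrative sketch of one plausible reading: compute the centroid and principal-axis orientation of a scribble mask and of a cross-attention map from their first- and second-order image moments, then penalize mismatches. The function names, the normalization choices, and the cosine-style orientation term are assumptions for illustration, not the paper's actual objective.

```python
# Hedged sketch: moment-based alignment between a scribble mask and an attention map.
# All names and loss terms here are illustrative assumptions, not ScribbleDiff's implementation.
import numpy as np

def image_moments(weight_map: np.ndarray):
    """Return centroid (cx, cy) and principal-axis angle (radians) of a 2D non-negative weight map."""
    ys, xs = np.nonzero(weight_map > 0)
    w = weight_map[ys, xs].astype(np.float64)
    m00 = w.sum()  # assumes the map is not all zeros
    cx, cy = (xs * w).sum() / m00, (ys * w).sum() / m00
    # Central second-order moments give the dominant orientation.
    mu20 = (w * (xs - cx) ** 2).sum() / m00
    mu02 = (w * (ys - cy) ** 2).sum() / m00
    mu11 = (w * (xs - cx) * (ys - cy)).sum() / m00
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # major-axis angle
    return (cx, cy), theta

def moment_alignment_loss(attn_map: np.ndarray, scribble_mask: np.ndarray) -> float:
    """Penalize centroid offset and orientation mismatch between an attention map and a scribble."""
    (acx, acy), a_theta = image_moments(attn_map)
    (scx, scy), s_theta = image_moments(scribble_mask)
    h, w = scribble_mask.shape
    centroid_term = np.hypot(acx - scx, acy - scy) / np.hypot(w, h)
    # Orientation is only defined up to 180 degrees, so compare on the doubled angle.
    orient_term = 1.0 - np.cos(2.0 * (a_theta - s_theta))
    return centroid_term + orient_term
```

In a training-free guidance loop, a loss of this kind would typically be differentiated with respect to the latents or cross-attention maps at each denoising step to steer object placement and orientation; how ScribbleDiff applies it, and how scribble propagation thickens the sparse strokes, is detailed in the paper and code rather than in this abstract.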