{"title":"WDANet: Exploring Stylized Animation via Diffusion Model for Woodcut-Style Design","authors":"Yangchunxue Ou, Jingjun Xu","doi":"10.1002/cav.70007","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Stylized animation strives for innovation and bold visual creativity. Integrating the inherent strong visual impact and color contrast of woodcut style into such animations is both appealing and challenging, especially during the design phase. Traditional woodcut methods, hand-drawing, and previous computer-aided techniques face challenges such as dwindling design inspiration, lengthy production times, and complex adjustment procedures. To address these issues, we propose a novel network framework, the Woodcut-style Design Assistant Network (WDANet). Our research is the first to use diffusion models to streamline the woodcut-style design process. We curate the Woodcut-62 dataset, which features works from 62 renowned historical artists, to train WDANet in capturing and learning the aesthetic nuances of woodcut prints. WDANet, based on the denoising U-Net, effectively decouples content and style features. It allows users to input or slightly modify a text description to quickly generate accurate, high-quality woodcut-style designs, saving time and offering flexibility. Quantitative and qualitative analyses, along with user studies, confirm that WDANet outperforms current state-of-the-art methods in generating woodcut-style images, demonstrating its value as a design aid.</p>\n </div>","PeriodicalId":50645,"journal":{"name":"Computer Animation and Virtual Worlds","volume":"36 1","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Animation and Virtual Worlds","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cav.70007","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 0
Abstract
Stylized animation strives for innovation and bold visual creativity. Integrating the strong visual impact and color contrast inherent in the woodcut style into such animations is both appealing and challenging, especially during the design phase. Traditional woodcut methods, hand-drawing, and earlier computer-aided techniques face challenges such as dwindling design inspiration, lengthy production times, and complex adjustment procedures. To address these issues, we propose a novel network framework, the Woodcut-style Design Assistant Network (WDANet). Our research is the first to use diffusion models to streamline the woodcut-style design process. We curate the Woodcut-62 dataset, featuring works from 62 renowned historical artists, to train WDANet to capture and learn the aesthetic nuances of woodcut prints. WDANet, built on a denoising U-Net, effectively decouples content and style features. It allows users to input or slightly modify a text description to quickly generate accurate, high-quality woodcut-style designs, saving time and offering flexibility. Quantitative and qualitative analyses, along with user studies, confirm that WDANet outperforms current state-of-the-art methods in generating woodcut-style images, demonstrating its value as a design aid.
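For readers unfamiliar with the kind of pipeline the abstract describes, the sketch below illustrates the general idea of a denoising network that receives separate content and style conditioning signals and is sampled with a standard DDPM reverse process. It is a minimal toy illustration under stated assumptions, not WDANet itself: the module structure, embedding dimensions, and the way conditioning is injected are simplified placeholders, and a real system would use a text encoder for the content embedding and a learned woodcut-style representation for the style embedding.

```python
# Illustrative sketch only (not the authors' code): a toy denoiser with
# decoupled content/style conditioning, sampled via the standard DDPM loop.
import torch
import torch.nn as nn


class DecoupledDenoiser(nn.Module):
    """Toy stand-in for a denoising U-Net with decoupled conditioning."""

    def __init__(self, img_channels=3, dim=64, cond_dim=128):
        super().__init__()
        self.in_conv = nn.Conv2d(img_channels, dim, 3, padding=1)
        # Separate projections keep the content and style pathways decoupled.
        self.content_proj = nn.Linear(cond_dim, dim)
        self.style_proj = nn.Linear(cond_dim, dim)
        self.time_proj = nn.Linear(1, dim)
        self.mid = nn.Sequential(nn.GroupNorm(8, dim), nn.SiLU(),
                                 nn.Conv2d(dim, dim, 3, padding=1))
        self.out_conv = nn.Conv2d(dim, img_channels, 3, padding=1)

    def forward(self, x_t, t, content_emb, style_emb):
        h = self.in_conv(x_t)
        # Inject timestep, content, and style as broadcast feature shifts.
        cond = (self.time_proj(t[:, None].float())
                + self.content_proj(content_emb)
                + self.style_proj(style_emb))
        h = h + cond[:, :, None, None]
        h = self.mid(h)
        return self.out_conv(h)  # predicted noise eps_theta


@torch.no_grad()
def ddpm_sample(model, content_emb, style_emb, shape, steps=50, device="cpu"):
    """Standard DDPM reverse process driven by the model's noise prediction."""
    betas = torch.linspace(1e-4, 0.02, steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)
    for t in reversed(range(steps)):
        t_batch = torch.full((shape[0],), t, device=device)
        eps = model(x, t_batch, content_emb, style_emb)
        coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x


if __name__ == "__main__":
    model = DecoupledDenoiser()
    # Placeholder embeddings standing in for text (content) and woodcut
    # style representations; both are random here purely for illustration.
    content = torch.randn(1, 128)
    style = torch.randn(1, 128)
    sample = ddpm_sample(model, content, style, shape=(1, 3, 32, 32))
    print(sample.shape)  # torch.Size([1, 3, 32, 32])
```

The design point this sketch is meant to convey is the separation of conditioning pathways: because content and style enter the denoiser through distinct projections, a text prompt can be edited to change the depicted content while the style signal independently steers the output toward the target aesthetic.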
Journal description:
With the advent of very powerful PCs and high-end graphics cards, there has been remarkable development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds and even with real worlds through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows them to be used in the movie industry. But this is only the beginning: with the development of Artificial Intelligence and agent technology, these characters will become increasingly autonomous and even intelligent. They will inhabit Virtual Worlds in a Virtual Life together with animals and plants.