{"title":"Music-stylized hierarchical dance synthesis with user control","authors":"","doi":"10.1016/j.vrih.2024.06.004","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><div>Synthesizing dance motions to match musical inputs is a significant challenge in animation research. Compared to functional human motions, such as locomotion, dance motions are creative and artistic, often influenced by music, and can be independent body language expressions. Dance choreography requires motion content to follow a general dance genre, whereas dance performances under musical influence are infused with diverse impromptu motion styles. Considering the high expressiveness and variations in space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem.</div></div><div><h3>Methods</h3><div>In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. We use a high-level choreography module built as a Transformer-based sequence model to predict the long-term structure of a dance genre and a low-level realization module that implements dance stylization and synchronization to match the musical input or user preferences. This novel framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully utilize existing high-quality dance datasets that do not have musical accompaniments, and the dance implementation can conveniently incorporate user controls and edit motions through a decoder network. Each module is replaceable at runtime, which adds flexibility to the synthesis of dance sequences.</div></div><div><h3>Results</h3><div>Synthesized results demonstrate that our framework generates high-quality diverse dance motions that are well adapted to varying musical conditions and user controls.</div></div>","PeriodicalId":33538,"journal":{"name":"Virtual Reality Intelligent Hardware","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Virtual Reality Intelligent Hardware","FirstCategoryId":"1093","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2096579624000342","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Computer Science","Score":null,"Total":0}
Abstract
Background
Synthesizing dance motions that match musical input is a significant challenge in animation research. Compared with functional human motions such as locomotion, dance motions are creative and artistic: they are often shaped by music, yet they can also stand alone as expressive body language. Dance choreography requires the motion content to follow a general dance genre, whereas dance performed under musical influence is infused with diverse, impromptu motion styles. Given this high expressiveness and the variation across space and time, providing accessible and effective user control for tuning dance motion styles remains an open problem.
Methods
In this study, we present a hierarchical framework that decouples the dance synthesis task into independent modules. A high-level choreography module, built as a Transformer-based sequence model, predicts the long-term structure of a dance genre, while a low-level realization module performs dance stylization and synchronization to match the musical input or user preferences. This framework allows the individual modules to be trained separately. Because of the decoupling, dance composition can fully exploit existing high-quality dance datasets that lack musical accompaniment, and the dance realization can conveniently incorporate user controls and edit motions through a decoder network. Each module is also replaceable at runtime, which adds flexibility to the synthesis of dance sequences.
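To make the two-level design concrete, the sketch below outlines one possible way such a pipeline could be organized in PyTorch. It is an illustrative assumption, not the authors' implementation: the module names (ChoreographyModule, RealizationModule), the motion-token vocabulary, the GRU decoder, and the feature sizes (music_dim, style_dim, pose_dim) are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Illustrative sketch only (assumed names and sizes), not the authors' code:
# a Transformer "choreography" model predicts abstract motion tokens carrying
# the long-term genre structure, and a separate "realization" decoder turns
# those tokens plus music features and a user style vector into pose frames.
import torch
import torch.nn as nn


class ChoreographyModule(nn.Module):
    """High level: autoregressive model of motion tokens; needs no music."""

    def __init__(self, vocab_size=512, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):                        # tokens: (B, T) int ids
        T = tokens.size(1)
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.encoder(self.embed(tokens), mask=mask)   # causal attention
        return self.head(h)                           # next-token logits


class RealizationModule(nn.Module):
    """Low level: decoder mapping tokens + music + user style to poses."""

    def __init__(self, vocab_size=512, music_dim=64, style_dim=16,
                 d_model=256, pose_dim=72):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.cond_proj = nn.Linear(music_dim + style_dim, d_model)
        self.decoder = nn.GRU(2 * d_model, d_model, num_layers=2,
                              batch_first=True)
        self.to_pose = nn.Linear(d_model, pose_dim)

    def forward(self, tokens, music_feats, style):    # style: (B, style_dim)
        B, T = tokens.shape
        cond = torch.cat([music_feats,
                          style.unsqueeze(1).expand(B, T, -1)], dim=-1)
        x = torch.cat([self.token_embed(tokens), self.cond_proj(cond)], dim=-1)
        h, _ = self.decoder(x)
        return self.to_pose(h)                        # (B, T, pose_dim)


# Runtime composition: extend the choreography without any music, then realize
# the abstract structure under the current music and user style control.
choreo, realize = ChoreographyModule(), RealizationModule()
tokens = torch.randint(0, 512, (1, 32))               # abstract dance structure
next_logits = choreo(tokens)                          # predict what comes next
music = torch.randn(1, 32, 64)                        # per-frame music features
style = torch.randn(1, 16)                            # user style control
poses = realize(tokens, music, style)                 # stylized motion frames
```

The property this sketch mirrors from the abstract is the decoupling: the choreography model never sees the music, so it could be trained on dance datasets without accompaniment, while the realization decoder consumes music features and a user style vector only at synthesis time, and either module could be swapped out independently.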
Results
The synthesized results demonstrate that our framework generates high-quality, diverse dance motions that adapt well to varying musical conditions and user controls.