TDGar-Ani: temporal motion fusion model and deformation correction network for enhancing garment animation details

Jiazhe Miao, Tao Peng, Fei Fang, Xinrong Hu, Li Li

The Visual Computer, published 2024-07-30. DOI: 10.1007/s00371-024-03575-0
Abstract
Garment simulation technology has widespread applications in fields such as virtual try-on and game animation. Traditional methods often require extensive manual annotation, which reduces efficiency. Recent methods that simulate garments from real videos often suffer from frame-jitter problems because they neglect temporal details. These approaches usually reconstruct human bodies and garments together without considering physical constraints, leading to unnatural stretching of garments during motion. To address these challenges, we propose TDGar-Ani. We first introduce a motion fusion module that optimizes human motion sequences and resolves frame-jitter issues. Initial garment deformations are then generated under physical constraints and combined with correction parameters output by a deformation correction network, ensuring coordinated deformation of garment and body during motion and thereby enhancing the realism of the simulation. Experimental results demonstrate the applicability of the motion fusion module for capturing human motion from real videos. The overall simulation results also exhibit greater naturalness and realism, effectively improving the alignment and deformation between garments and human body motion.
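The abstract does not specify how the motion fusion module is implemented. As a minimal sketch of the kind of temporal smoothing such a module would perform on a per-frame pose sequence, the snippet below filters stacked pose parameters (e.g. SMPL axis-angle joints flattened to a `(T, D)` array) along the time axis; the function name `fuse_motion` and the choice of a Savitzky-Golay filter are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import savgol_filter

def fuse_motion(poses: np.ndarray, window: int = 9, order: int = 3) -> np.ndarray:
    """Smooth a per-frame pose sequence to attenuate frame jitter.

    poses: (T, D) array of per-frame pose parameters (e.g. SMPL axis-angle
    joints flattened to D values). Returns an array of the same shape.
    `window` must be odd and greater than `order`.
    """
    # Savitzky-Golay filtering fits a low-order polynomial in a sliding
    # window, preserving low-frequency motion while suppressing
    # high-frequency frame-to-frame noise (the visible jitter).
    return savgol_filter(poses, window_length=window, polyorder=order, axis=0)
```

In this sketch, estimated per-frame poses from a video would be smoothed once before driving the body model, so downstream garment deformation sees a temporally coherent motion sequence.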
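Likewise, the deformation correction network is only described at a high level: physics-based initial deformations are refined by learned correction parameters. A hedged sketch of one plausible realization is shown below, where a small MLP predicts per-vertex residuals that are added to simulator output; the class name `DeformationCorrector`, the MLP architecture, and the residual formulation are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DeformationCorrector(nn.Module):
    """Predict per-vertex corrections on top of a physics-based deformation."""

    def __init__(self, pose_dim: int, num_verts: int, hidden: int = 256):
        super().__init__()
        # Maps the body pose to a flattened (V * 3) field of vertex offsets.
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_verts * 3),
        )

    def forward(self, pose: torch.Tensor, physics_verts: torch.Tensor) -> torch.Tensor:
        # pose: (B, pose_dim); physics_verts: (B, V, 3) from the simulator.
        delta = self.mlp(pose).view(physics_verts.shape)  # (B, V, 3) residuals
        # The correction is additive, so the physics solution remains the
        # baseline and the network only accounts for what it gets wrong.
        return physics_verts + delta
```

A residual design like this keeps the physical constraints as the primary driver of garment shape while letting the network correct pose-dependent artifacts such as unnatural stretching.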