{"title":"动态毛发数据的时空编辑","authors":"Yijie Wu, Yongtang Bao, Yue Qi","doi":"10.1109/ICVRV.2017.00077","DOIUrl":null,"url":null,"abstract":"Hair plays a unique role in depicting a person's character. Currently, most hair simulation techniques require a lot of computation time, or rely on complex capture settings. Editing and reusing of existing hair model data are very important topics in computer graphics. In this paper, we present a spatialtemporal editing technique for dynamic hair data. This method can generate a longer or even infinite length sequence of hair motion according to its motion trend from a short input. Firstly, we build spatial-temporal neighborhood information about input hair data. We then initialize the output according to the input exemplar and output constraints, and optimize the output through iterative search and assignment steps. To make the method be more efficient, we select a sparse part of the hair as the guide hair to simplify the model, and interpolate a full set of hair after the synthesis. Results show that our method can deal with a variety of hairstyles and different way of motions.","PeriodicalId":187934,"journal":{"name":"2017 International Conference on Virtual Reality and Visualization (ICVRV)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spatial-Temporal Editing for Dynamic Hair Data\",\"authors\":\"Yijie Wu, Yongtang Bao, Yue Qi\",\"doi\":\"10.1109/ICVRV.2017.00077\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hair plays a unique role in depicting a person's character. Currently, most hair simulation techniques require a lot of computation time, or rely on complex capture settings. Editing and reusing of existing hair model data are very important topics in computer graphics. 
In this paper, we present a spatialtemporal editing technique for dynamic hair data. This method can generate a longer or even infinite length sequence of hair motion according to its motion trend from a short input. Firstly, we build spatial-temporal neighborhood information about input hair data. We then initialize the output according to the input exemplar and output constraints, and optimize the output through iterative search and assignment steps. To make the method be more efficient, we select a sparse part of the hair as the guide hair to simplify the model, and interpolate a full set of hair after the synthesis. Results show that our method can deal with a variety of hairstyles and different way of motions.\",\"PeriodicalId\":187934,\"journal\":{\"name\":\"2017 International Conference on Virtual Reality and Visualization (ICVRV)\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 International Conference on Virtual Reality and Visualization (ICVRV)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICVRV.2017.00077\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 International Conference on Virtual Reality and Visualization (ICVRV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICVRV.2017.00077","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
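The exemplar-based extension described in the abstract (build temporal neighborhoods of the input, then repeatedly search for the best-matching neighborhood and assign the frame that follows it) can be illustrated with a minimal sketch on guide-hair data. This is a simplified nearest-neighbor variant, not the paper's actual spatial-temporal optimization; the array layout `(frames, strand_vertices, 3)`, the window size, and the function name are illustrative assumptions.

```python
import numpy as np

def synthesize_motion(exemplar, out_len, window=3):
    """Extend a short guide-hair motion exemplar to `out_len` frames.

    exemplar: (T, N, 3) array of guide-hair vertex positions per frame.
    Repeatedly matches the last `window` output frames against all input
    windows and copies the input frame that follows the best match,
    video-texture style. Returns an (out_len, N, 3) array.
    """
    T = exemplar.shape[0]
    flat = exemplar.reshape(T, -1)            # one feature vector per frame
    out = [flat[i] for i in range(window)]    # seed with the exemplar's start
    starts = np.arange(T - window)            # windows whose successor exists
    cands = np.stack([flat[s:s + window].ravel() for s in starts])
    while len(out) < out_len:
        cur = np.concatenate(out[-window:])   # current temporal neighborhood
        d = np.linalg.norm(cands - cur, axis=1)
        best = starts[np.argmin(d)]           # search step
        out.append(flat[best + window])       # assignment step: copy successor
    return np.stack(out).reshape(out_len, *exemplar.shape[1:])
```

Because each appended frame always has a valid successor window in the input, the loop can run indefinitely, which is what allows an "infinite" output sequence from a short exemplar; the paper's guide-hair simplification corresponds to keeping `N` small here and interpolating the remaining strands afterwards.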