{"title":"基于观察到的3D语音动态的人脸动画","authors":"Gregor A. Kalberer, L. Gool","doi":"10.1109/CA.2001.982373","DOIUrl":null,"url":null,"abstract":"Realistic face animation is especially hard as we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face with a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is two-fold. On the one hand, the face can be animated (in our case, it can be made to speak new sentences). On the other hand, face dynamics can be tracked in 3D without markers for performance capture.","PeriodicalId":244191,"journal":{"name":"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"49","resultStr":"{\"title\":\"Face animation based on observed 3D speech dynamics\",\"authors\":\"Gregor A. Kalberer, L. Gool\",\"doi\":\"10.1109/CA.2001.982373\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Realistic face animation is especially hard as we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. 
Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face with a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimension of the mask shape space. The result is two-fold. On the one hand, the face can be animated (in our case, it can be made to speak new sentences). On the other hand, face dynamics can be tracked in 3D without markers for performance capture.\",\"PeriodicalId\":244191,\"journal\":{\"name\":\"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"49\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. No.01TH8596)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CA.2001.982373\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Computer Animation 2001. Fourteenth Conference on Computer Animation (Cat. 
No.01TH8596)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CA.2001.982373","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Face animation based on observed 3D speech dynamics
Realistic face animation is especially hard because we are all experts in the perception and interpretation of face dynamics. One approach is to simulate facial anatomy. Alternatively, animation can be based on first observing the visible 3D dynamics, extracting the basic modes, and then putting these together according to the required performance. This is the strategy followed in this paper, which focuses on speech. The approach follows a kind of bootstrap procedure. First, 3D shape statistics are learned from a talking face carrying a relatively small number of markers. A 3D reconstruction is produced at temporal intervals of 1/25 s. A topological mask of the lower half of the face is fitted to the motion. Principal component analysis (PCA) of the mask shapes reduces the dimensionality of the mask shape space. The result is twofold. On the one hand, the face can be animated (in our case, it can be made to speak new sentences). On the other hand, face dynamics can be tracked in 3D without markers for performance capture.
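The core of the pipeline the abstract describes is PCA over per-frame mask shapes: each tracked frame of the lower-face mask is flattened into a shape vector, PCA extracts the dominant deformation modes, and animation amounts to choosing mode coefficients over time. The sketch below illustrates that idea with synthetic data; the frame rate matches the paper's 1/25 s interval, but the vertex count, mode count, and the random stand-in data are assumptions, not the authors' actual capture setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames = 250       # e.g. 10 s of capture at 25 fps (1/25 s intervals)
n_vertices = 300     # vertices in the lower-face mask (illustrative)
n_modes = 10         # retained principal modes (assumed)

# Synthetic stand-in data: a mean mask shape plus a few latent
# deformation modes, mimicking shapes recovered from marker tracking.
mean_shape = rng.normal(size=3 * n_vertices)
true_modes = rng.normal(size=(5, 3 * n_vertices))
weights = rng.normal(size=(n_frames, 5))
shapes = mean_shape + weights @ true_modes   # (n_frames, 3*n_vertices)

# PCA via SVD of the centered data matrix.
centered = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:n_modes]                    # principal deformation modes

# Project each frame into the low-dimensional mode space ...
coeffs = centered @ components.T             # (n_frames, n_modes)

# ... and map back to geometry. Animating new speech corresponds to
# synthesizing coefficient trajectories and reconstructing mask shapes.
reconstructed = shapes.mean(axis=0) + coeffs @ components

print(coeffs.shape)                                   # (250, 10)
print(np.allclose(reconstructed, shapes, atol=1e-6))  # True
```

Because the synthetic shapes lie in a 5-dimensional affine subspace, the 10 retained modes reconstruct them essentially exactly; on real capture data the retained modes would instead capture most, not all, of the shape variance.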