{"title":"Synthesizing facial expressions for signing avatars using MPEG4 feature points","authors":"Y. Bouzid, Oussama El Ghoul, M. Jemni","doi":"10.1109/ICTA.2013.6815304","DOIUrl":null,"url":null,"abstract":"Thanks to the advances in virtual reality and human modeling techniques, signing avatars have become increasingly used in a wide variety of applications like the automatic translation of web pages, interactive e-learning environments and mobile phone services, with a view to improving the ability of hearing impaired people to access information and communicate with others. But, to truly understand and correctly interpret the signed utterances, the virtual characters should be capable of addressing all aspects of sign formation including facial features which play an important and crucial role in the communication of emotions and conveying specific meanings. In this context, we present in this paper a simple yet effective method to generate facial expressions for signing avatars basing on the physics-based muscle model introduced by Keith Waters. The main focus of our work is to automate the task of the muscle mapping on the face model in the correct anatomical positions as well as the detection of the jaw part by using a small set of MPEG-4 Feature Points of the given mesh.","PeriodicalId":188977,"journal":{"name":"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTA.2013.6815304","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Thanks to advances in virtual reality and human modeling techniques, signing avatars are increasingly used in a wide variety of applications, such as automatic translation of web pages, interactive e-learning environments, and mobile phone services, with a view to improving the ability of hearing-impaired people to access information and communicate with others. However, for signed utterances to be truly understood and correctly interpreted, the virtual characters must address all aspects of sign formation, including facial expressions, which play a crucial role in communicating emotions and conveying specific meanings. In this context, this paper presents a simple yet effective method for generating facial expressions for signing avatars based on the physics-based muscle model introduced by Keith Waters. The main focus of our work is to automate both the mapping of muscles onto the face model at anatomically correct positions and the detection of the jaw region, using a small set of MPEG-4 Feature Points of the given mesh.
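To make the idea concrete, the sketch below illustrates how MPEG-4 facial definition points could anchor a Waters-style linear muscle on a mesh and how a simplified contraction might displace nearby vertices. This is a minimal illustration under assumed conventions, not the authors' implementation: the feature-point labels, coordinates, and the simplified angular/radial falloff are hypothetical.

```python
# Illustrative sketch (not the paper's code): anchoring a Waters-style linear
# muscle on MPEG-4 feature points and applying a simplified contraction.
import numpy as np

def place_muscle(fdp, head_label, tail_label):
    """Return (head, tail) positions of a linear muscle, taken from two
    MPEG-4 feature points of the mesh (labels such as '4.2' are assumptions)."""
    return np.asarray(fdp[head_label], float), np.asarray(fdp[tail_label], float)

def contract_linear_muscle(vertices, head, tail, contraction, r_max):
    """Pull vertices toward the muscle head with angular and radial falloff,
    in the spirit of Waters' linear muscle model (heavily simplified)."""
    out = vertices.copy()
    axis = tail - head
    axis /= np.linalg.norm(axis)
    for i, p in enumerate(vertices):
        d_vec = p - head
        dist = np.linalg.norm(d_vec)
        if dist == 0.0 or dist > r_max:
            continue                                # outside the zone of influence
        ang = np.dot(d_vec / dist, axis)            # cosine of angle to muscle axis
        if ang <= 0.0:
            continue                                # behind the muscle head
        radial = np.cos(dist / r_max * np.pi / 2)   # fades to zero at r_max
        out[i] = p + contraction * ang * radial * (head - p)
    return out

# Hypothetical usage: two feature points around the eyebrow anchor one muscle.
fdp = {"4.2": (0.03, 0.10, 0.09), "4.4": (0.06, 0.12, 0.08)}  # made-up coordinates
head, tail = place_muscle(fdp, "4.4", "4.2")
verts = np.random.rand(500, 3) * 0.2
deformed = contract_linear_muscle(verts, head, tail, contraction=0.3, r_max=0.05)
```

In this spirit, supplying the small set of feature points per mesh is what lets the muscle placement (and, analogously, the jaw detection) be automated rather than tuned by hand for each avatar.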