Title: Lip feature extraction using motion, color, and edge information
Authors: R. Dansereau, C. Li, R. Goubran
Published in: The 2nd IEEE International Workshop on Haptic, Audio and Visual Environments and Their Applications (HAVE 2003), November 10, 2003
DOI: 10.1109/HAVE.2003.1244716
Citations: 5
Abstract
In this paper, we present a Markov random field-based technique for extracting lip features from video using color and edge information. Motion between frames is used as an indicator to locate the approximate lip region, while color and edge information allow the boundaries of naturally covered lips to be identified and segmented from the rest of the face. Geometric lip features are then extracted from the segmented lip area. Experimental results show 96% accuracy in extracting six key lip feature points in typical talking-head video sequences when the tongue is not visible in the scene, and 90% accuracy when the tongue is visible.
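The paper does not include source code; as an illustration of the first stage described above (using inter-frame motion as an indicator of the approximate lip region), the following is a minimal sketch. The function name, threshold value, and the use of a simple absolute frame difference with a bounding box are assumptions for illustration, not the authors' actual implementation, which additionally uses Markov random fields with color and edge cues.

```python
import numpy as np

def motion_region(prev_frame, curr_frame, thresh=25):
    """Approximate the moving region (e.g. the talker's lips) by
    inter-frame differencing. Frames are 2-D grayscale uint8 arrays.
    Returns (x_min, y_min, x_max, y_max) or None if no motion.
    The threshold of 25 is an illustrative choice, not from the paper."""
    # Widen to int16 so the subtraction cannot wrap around on uint8.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    # Bounding box of all above-threshold motion pixels.
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Two synthetic 100x100 frames: a bright patch appears in the second one,
# simulating lip movement between consecutive frames.
prev = np.zeros((100, 100), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 30:70] = 200
print(motion_region(prev, curr))  # -> (30, 40, 69, 59)
```

In practice the bounding box would only coarsely localize the mouth; the paper's MRF segmentation with color and edge information would then refine the lip boundary within it.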