Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction.

M. Bartlett, G. Littlewort, Ian R. Fasel, J. Movellan
{"title":"Real Time Face Detection and Facial Expression Recognition: Development and Applications to Human Computer Interaction.","authors":"M. Bartlett, G. Littlewort, Ian R. Fasel, J. Movellan","doi":"10.1109/CVPRW.2003.10057","DOIUrl":null,"url":null,"abstract":"Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life. Face to face communication is a real-time process operating at a a time scale in the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of Adaboost and SVM's enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6]. The generalization performance to new subjects for a 7- way forced choice correct. Most interestingly the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and unobtrusive manner. 
The system has been deployed on a wide variety of platforms including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors, assessment of human-robot interaction.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"566","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2003 Conference on Computer Vision and Pattern Recognition Workshop","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPRW.2003.10057","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 566

Abstract

Computer-animated agents and robots bring a social dimension to human-computer interaction and force us to think in new ways about how computers could be used in daily life. Face-to-face communication is a real-time process operating at a time scale on the order of 40 milliseconds. The level of uncertainty at this time scale is considerable, making it necessary for humans and machines to rely on sensory-rich perceptual primitives rather than slow symbolic inference processes. In this paper we present progress on one such perceptual primitive. The system automatically detects frontal faces in the video stream and codes them with respect to 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, and surprise. The face finder employs a cascade of feature detectors trained with boosting techniques [15, 2]. The expression recognizer receives image patches located by the face detector. A Gabor representation of the patch is formed and then processed by a bank of SVM classifiers. A novel combination of AdaBoost and SVMs enhances performance. The system was tested on the Cohn-Kanade dataset of posed facial expressions [6]. The system generalized well to new subjects on a 7-way forced choice task. Most interestingly, the outputs of the classifier change smoothly as a function of time, providing a potentially valuable representation for coding facial expression dynamics in a fully automatic and unobtrusive manner. The system has been deployed on a wide variety of platforms, including Sony's Aibo pet robot, ATR's RoboVie, and CU animator, and is currently being evaluated for applications including automatic reading tutors and assessment of human-robot interaction.
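The abstract describes forming a Gabor representation of each face patch before feeding it to the SVM bank. The sketch below illustrates that first stage under stated assumptions: it is not the paper's implementation, and the kernel size, wavelengths, and number of orientations are illustrative choices, not values taken from the paper.

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma=None, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a sinusoidal carrier windowed by a
    Gaussian envelope, rotated by angle theta."""
    if sigma is None:
        sigma = 0.56 * lam                      # common bandwidth heuristic
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def gabor_features(patch, n_orient=8, wavelengths=(4, 8, 16)):
    """Filter a grayscale patch with a bank of Gabor kernels (all
    orientation/wavelength pairs) and return the response magnitudes
    flattened into a single feature vector."""
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(15, theta=k * np.pi / n_orient, lam=lam)
            # circular convolution via FFT, kept brief for the sketch
            resp = np.real(np.fft.ifft2(np.fft.fft2(patch) *
                                        np.fft.fft2(kern, s=patch.shape)))
            feats.append(np.abs(resp).ravel())
    return np.concatenate(feats)

patch = np.random.default_rng(0).random((48, 48))  # stand-in for a face patch
vec = gabor_features(patch)
print(vec.shape)  # 24 filters x 48*48 responses = (55296,)
```

In a full pipeline of the kind the abstract outlines, a vector like `vec` would be the input to the bank of SVM classifiers, with AdaBoost used to select a discriminative subset of the Gabor responses.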