Washef Ahmed, S. Mitra, Kunal Chanda, Debasis Mazumdar
Assisting the autistic with improved facial expression recognition from mixed expressions
2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG), 2013-12-01
DOI: 10.1109/NCVPRIPG.2013.6776229
Citations: 9
Abstract
People with autism have difficulty recognizing other people's emotions and are therefore unable to react to them. Although there have been attempts to develop systems that analyze facial expressions for persons with autism, little work has explored capturing one or more expressions from mixed expressions, i.e., mixtures of two closely related expressions. Such a capability is essential for a psychotherapeutic tool used for analysis during counseling. This paper presents an approach to improving the recognition accuracy of one or more of the six prototypic expressions, namely happiness, surprise, fear, disgust, sadness and anger, from a mixture of two facial expressions. For this purpose, a motion-gradient-based optical flow capturing facial muscle movement is computed between frames of a given video sequence. The computed optical flow is then used to generate feature vectors that serve as signatures of the six basic prototypic expressions. A rule base generated by a decision tree is used to cluster the feature vectors obtained from the video sequence, and the clustering result is used to recognize the expressions. The relative intensity of the expressions present on a face in a frame is also measured. With the introduction of component-based analysis, i.e., computing the feature vectors over proposed regions of interest on the face, considerable improvement is observed in the recognition of one or more expressions. The results have been validated against human judgement.
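The pipeline described in the abstract — per-region motion features from consecutive frames, followed by rule-based recognition that can emit more than one label for a mixed expression — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: a crude frame-difference energy stands in for their motion-gradient optical flow, and a hand-written threshold rule base stands in for rules learned by a decision tree. All region names, thresholds, and frame sizes here are hypothetical.

```python
# Hypothetical sketch of the abstract's pipeline. Frame differencing
# stands in for the paper's motion-gradient optical flow; hand-written
# threshold rules stand in for the decision-tree-generated rule base.

REGIONS = {            # component-based analysis: facial regions of interest,
    "brows": (0, 2),   # given as row ranges within a toy 6-row "face" image
    "eyes":  (2, 4),
    "mouth": (4, 6),
}

def region_motion(prev, curr):
    """Mean absolute intensity change per region (crude motion proxy)."""
    feats = {}
    for name, (r0, r1) in REGIONS.items():
        diffs = [abs(c - p)
                 for prow, crow in zip(prev[r0:r1], curr[r0:r1])
                 for p, c in zip(prow, crow)]
        feats[name] = sum(diffs) / len(diffs)
    return feats

def classify(feats, strong=10.0):
    """Toy rule base: thresholds on per-region motion select expressions.
    Returning more than one label models a mixed expression."""
    labels = []
    if feats["mouth"] >= strong:
        labels.append("happiness")
    if feats["brows"] >= strong:
        labels.append("surprise")
    return labels or ["neutral"]

# Two toy 6x4 grayscale frames with motion concentrated at the mouth.
prev = [[0] * 4 for _ in range(6)]
curr = [[0] * 4 for _ in range(4)] + [[20] * 4 for _ in range(2)]

feats = region_motion(prev, curr)
print(classify(feats))  # mouth region moved strongly -> ['happiness']
```

In a real system, the per-region features would come from dense optical flow (e.g. OpenCV's Farneback method) over tracked facial regions, and the thresholds and region combinations would be learned from labelled video rather than fixed by hand.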