Multimodal approach for cognitive task performance prediction from body postures, facial expressions and EEG signal

Ashwin Ramesh Babu, Akilesh Rajavenkatanarayanan, J. Brady, F. Makedon
{"title":"Multimodal approach for cognitive task performance prediction from body postures, facial expressions and EEG signal","authors":"Ashwin Ramesh Babu, Akilesh Rajavenkatanarayanan, J. Brady, F. Makedon","doi":"10.1145/3279810.3279849","DOIUrl":null,"url":null,"abstract":"Recent developments in computer vision and the emergence of wearable sensors have opened opportunities for the development of advanced and sophisticated techniques to enable multi-modal user assessment and personalized training which is important in educational, industrial training and rehabilitation applications. They have also paved way for the use of assistive robots to accurately assess human cognitive and physical skills. Assessment and training cannot be generalized as the requirement varies for every person and for every application. The ability of the system to adapt to the individual's needs and performance is essential for its effectiveness. In this paper, the focus is on task performance prediction which is an important parameter to consider for personalization. Several research works focus on how to predict task performance based on physiological and behavioral data. In this work, we follow a multi-modal approach where the system collects information from different modalities to predict performance based on (a) User's emotional state recognized from facial expressions(Behavioral data), (b) User's emotional state from body postures(Behavioral data) (c) task performance from EEG signals (Physiological data) while the person performs a robot-based cognitive task. This multi-modal approach of combining physiological data and behavioral data produces the highest accuracy of 87.5 percent, which outperforms the accuracy of prediction extracted from any single modality. In particular, this approach is useful in finding associations between facial expressions, body postures and brain signals while a person performs a cognitive task.","PeriodicalId":326513,"journal":{"name":"Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Workshop on Modeling Cognitive Processes from Multimodal Data","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3279810.3279849","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

Recent developments in computer vision and the emergence of wearable sensors have opened opportunities for advanced techniques that enable multi-modal user assessment and personalized training, which are important in educational, industrial-training, and rehabilitation applications. They have also paved the way for assistive robots that accurately assess human cognitive and physical skills. Assessment and training cannot be generalized, as requirements vary from person to person and from application to application; a system's ability to adapt to the individual's needs and performance is essential to its effectiveness. This paper focuses on task-performance prediction, an important parameter for personalization. While several prior works predict task performance from physiological or behavioral data alone, this work follows a multi-modal approach in which the system collects information from different modalities and predicts performance from (a) the user's emotional state recognized from facial expressions (behavioral data), (b) the user's emotional state recognized from body postures (behavioral data), and (c) task performance estimated from EEG signals (physiological data), all captured while the person performs a robot-based cognitive task. Combining physiological and behavioral data in this way yields the highest accuracy, 87.5%, outperforming prediction from any single modality. In particular, the approach is useful for finding associations between facial expressions, body postures, and brain signals while a person performs a cognitive task.
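
The abstract does not specify how the three modality streams are combined. A common scheme for this kind of setup is late fusion: train one classifier per modality and average their class probabilities. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' pipeline; the feature matrices (X_face, X_posture, X_eeg), the label vector y, and the choice of scikit-learn classifiers are all hypothetical placeholders.

```python
# Minimal late-fusion sketch (illustrative only, not the paper's method).
# One classifier per modality; the fused prediction is a soft vote over
# their class probabilities. Feature extraction is assumed to happen upstream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def train_per_modality(X_face, X_posture, X_eeg, y):
    """Fit one classifier on each modality's features (hypothetical shapes)."""
    face_clf = SVC(probability=True).fit(X_face, y)        # facial-expression features
    posture_clf = SVC(probability=True).fit(X_posture, y)  # body-posture features
    eeg_clf = RandomForestClassifier().fit(X_eeg, y)       # EEG features (e.g. band power)
    return face_clf, posture_clf, eeg_clf

def predict_fused(clfs, X_face, X_posture, X_eeg):
    """Average per-modality class probabilities and take the argmax."""
    face_clf, posture_clf, eeg_clf = clfs
    probs = (face_clf.predict_proba(X_face)
             + posture_clf.predict_proba(X_posture)
             + eeg_clf.predict_proba(X_eeg)) / 3.0
    return probs.argmax(axis=1)
```

Soft voting is only one of several fusion strategies; feature-level concatenation or a learned meta-classifier are common alternatives. The 87.5% figure reported above refers to the paper's own fusion pipeline, which may differ from this sketch.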