Towards Objective Assessment of Movie Trailer Quality Using Human Electroencephalogram and Facial Recognition

Qing Wu, Wenbing Zhao, Tessadori Jacopo
{"title":"Towards Objective Assessment of Movie Trailer Quality Using Human Electroencephalogram and Facial Recognition","authors":"Qing Wu, Wenbing Zhao, Tessadori Jacopo","doi":"10.1109/EIT.2018.8500283","DOIUrl":null,"url":null,"abstract":"In this paper, we propose a novel framework to objectively evaluate the quality of movie trailers by fusing two sensing modalities: (1) Human Electroencephalogram (EEG), and (2) computer-vision based facial expression recognition. The EEG sensing data are acquired via a cap instrumented with a set of 4-channel EEG sensors from the OpenBCI Ganglion board. The facial expressions are captured while a user is watching a movie trailer using a regular webcam to help establish the context for EEG analysis. On their own, facial expressions reveal how engaged a user is while watching a movie trailer. Additionally, facial expression data help us identify situations where noises caused by muscle movement in EEG data. Using a shallow neural network, we classify facial expressions into two categories: positive and negative emotions. A quarter-central decision making strategy model is used to analyze EEG signals with a low pass filter activated by time stamp when large human movements are detected. A small human subject test showed that the adaptive analysis method can achieve higher accuracy than that obtained via EEG alone. 
Besides for movie trailer evaluation, this framework can be utilized in the future towards remote training evaluation, wearable device personalization, and assisting paralyzed people to communicate with others.","PeriodicalId":188414,"journal":{"name":"2018 IEEE International Conference on Electro/Information Technology (EIT)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Electro/Information Technology (EIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EIT.2018.8500283","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In this paper, we propose a novel framework to objectively evaluate the quality of movie trailers by fusing two sensing modalities: (1) human electroencephalogram (EEG), and (2) computer-vision-based facial expression recognition. The EEG data are acquired via a cap instrumented with a set of 4-channel EEG sensors connected to the OpenBCI Ganglion board. Facial expressions are captured with a regular webcam while a user watches a movie trailer, helping to establish the context for the EEG analysis. On their own, facial expressions reveal how engaged a user is while watching a movie trailer. Additionally, the facial expression data help us identify segments of the EEG data that are contaminated by noise from muscle movement. Using a shallow neural network, we classify facial expressions into two categories: positive and negative emotions. A quarter-central decision-making strategy is used to analyze the EEG signals, with a low-pass filter activated at the time stamps where large human movements are detected. A small human-subject test showed that this adaptive analysis method can achieve higher accuracy than that obtained via EEG alone. Beyond movie trailer evaluation, this framework could in the future be applied to remote training evaluation, wearable device personalization, and assisting paralyzed people in communicating with others.
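The artifact-handling step described in the abstract, low-pass filtering the EEG only around the time stamps where the webcam detects large movements, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names, the use of a simple moving-average filter as the low-pass stage, and all window/filter parameters are hypothetical.

```python
def moving_average(signal, k):
    """Simple k-point moving-average low-pass filter (illustrative stand-in
    for whatever low-pass filter the paper's pipeline uses)."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def suppress_motion_artifacts(eeg, motion_stamps, fs, window_s=0.5, k=5):
    """Low-pass filter one EEG channel only inside windows centered on the
    time stamps where large facial/body movements were detected.

    eeg           -- list of raw samples for one channel
    motion_stamps -- movement times in seconds (from the webcam analysis)
    fs            -- EEG sampling rate in Hz
    window_s      -- half-width of each filtered window, in seconds
    """
    filtered = moving_average(eeg, k)
    out = list(eeg)  # samples outside the windows pass through untouched
    half = int(window_s * fs)
    for t in motion_stamps:
        c = int(t * fs)
        for i in range(max(0, c - half), min(len(eeg), c + half)):
            out[i] = filtered[i]
    return out
```

Gating the filter by time stamp, rather than filtering the whole recording, preserves the unfiltered EEG everywhere the facial-expression stream reports no large movement.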