EMO: real-time emotion recognition from single-eye images for resource-constrained eyewear devices

Hao Wu, Jinghao Feng, Xuejin Tian, Edward Sun, Yunxin Liu, Bo Dong, Fengyuan Xu, Sheng Zhong
{"title":"EMO: real-time emotion recognition from single-eye images for resource-constrained eyewear devices","authors":"Hao Wu, Jinghao Feng, Xuejin Tian, Edward Sun, Yunxin Liu, Bo Dong, Fengyuan Xu, Sheng Zhong","doi":"10.1145/3386901.3388917","DOIUrl":null,"url":null,"abstract":"Real-time user emotion recognition is highly desirable for many applications on eyewear devices like smart glasses. However, it is very challenging to enable this capability on such devices due to tightly constrained image contents (only eye-area images available from the on-device eye-tracking camera) and computing resources of the embedded system. In this paper, we propose and develop a novel system called EMO that can recognize, on top of a resource-limited eyewear device, real-time emotions of the user who wears it. Unlike most existing solutions that require whole-face images to recognize emotions, EMO only utilizes the single-eye-area images captured by the eye-tracking camera of the eyewear. To achieve this, we design a customized deep-learning network to effectively extract emotional features from input single-eye images and a personalized feature classifier to accurately identify a user's emotions. EMO also exploits the temporal locality and feature similarity among consecutive video frames of the eye-tracking camera to further reduce the recognition latency and system resource usage. We implement EMO on two hardware platforms and conduct comprehensive experimental evaluations. Our results demonstrate that EMO can continuously recognize seven-type emotions at 12.8 frames per second with a mean accuracy of 72.2%, significantly outperforming the state-of-the-art approach, and consume much fewer system resources.","PeriodicalId":345029,"journal":{"name":"Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services","volume":"77 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"21","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 18th International Conference on Mobile Systems, Applications, and Services","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386901.3388917","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 21

Abstract

Real-time user emotion recognition is highly desirable for many applications on eyewear devices such as smart glasses. However, enabling this capability on such devices is very challenging because both the image content (only eye-area images are available from the on-device eye-tracking camera) and the computing resources of the embedded system are tightly constrained. In this paper, we propose and develop a novel system called EMO that can recognize, on top of a resource-limited eyewear device, the real-time emotions of the user who wears it. Unlike most existing solutions, which require whole-face images to recognize emotions, EMO utilizes only the single-eye-area images captured by the eyewear's eye-tracking camera. To achieve this, we design a customized deep-learning network that effectively extracts emotional features from input single-eye images and a personalized feature classifier that accurately identifies a user's emotions. EMO also exploits the temporal locality and feature similarity among consecutive video frames of the eye-tracking camera to further reduce recognition latency and system resource usage. We implement EMO on two hardware platforms and conduct comprehensive experimental evaluations. Our results demonstrate that EMO can continuously recognize seven types of emotions at 12.8 frames per second with a mean accuracy of 72.2%, significantly outperforming the state-of-the-art approach while consuming far fewer system resources.
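The pipeline the abstract describes has three parts: a compact feature extractor over single-eye images, a per-user classification head, and a frame-reuse optimization driven by the similarity of consecutive eye-camera frames. The sketch below illustrates that structure only; it is not the authors' implementation. The network layout, the class labels, the mean-absolute-difference gate, and the threshold value are all illustrative assumptions.

```python
# Illustrative sketch only -- not EMO's released code. It assumes a lightweight CNN
# backbone, a small per-user classifier head, and a simple frame-difference gate
# standing in for the paper's temporal-locality optimization.
import torch
import torch.nn as nn

# Hypothetical label set; the paper recognizes seven emotion types.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

class EyeFeatureNet(nn.Module):
    """Compact CNN mapping a grayscale single-eye image to a feature vector."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(x).flatten(1))

class PersonalizedClassifier(nn.Module):
    """Small head that would be fine-tuned per user on a few labeled samples."""
    def __init__(self, feat_dim: int = 64, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

class FrameGate:
    """Skip recomputation when consecutive eye-camera frames are nearly identical."""
    def __init__(self, threshold: float = 0.02):  # threshold is an assumed value
        self.threshold = threshold
        self.prev_frame = None
        self.prev_label = None

    def changed(self, frame: torch.Tensor) -> bool:
        if self.prev_frame is None:
            return True
        diff = (frame - self.prev_frame).abs().mean().item()
        return diff > self.threshold

@torch.no_grad()
def recognize(frame, backbone, head, gate):
    """Return an emotion label, reusing the previous one for near-duplicate frames."""
    if gate.changed(frame):
        logits = head(backbone(frame.unsqueeze(0)))  # frame: (1, H, W) grayscale tensor
        gate.prev_label = EMOTIONS[int(logits.argmax(dim=1))]
        gate.prev_frame = frame
    return gate.prev_label
```

In this sketch, a frame whose mean absolute pixel difference from the previous frame falls below the threshold simply reuses the previous label, which is one plausible way to exploit the temporal locality the abstract refers to; the paper's actual mechanism may differ.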