{"title":"Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals","authors":"En-Jui Chang, Abbas Rahimi, L. Benini, A. Wu","doi":"10.1109/AICAS.2019.8771622","DOIUrl":null,"url":null,"abstract":"To interact naturally and achieve mutual sympathy between humans and machines, emotion recognition is one of the most important function to realize advanced human-computer interaction devices. Due to the high correlation between emotion and involuntary physiological changes, physiological signals are a prime candidate for emotion analysis. However, due to the need of a huge amount of training data for a high-quality machine learning model, computational complexity becomes a major bottleneck. To overcome this issue, brain-inspired hyperdimensional (HD) computing, an energy-efficient and fast learning computational paradigm, has a high potential to achieve a balance between accuracy and the amount of necessary training data. We propose an HD Computing-based Multimodality Emotion Recognition (HDC-MER). HDCMER maps real-valued features to binary HD vectors using a random nonlinear function, and further encodes them over time, and fuses across different modalities including GSR, ECG, and EEG. The experimental results show that, compared to the best method using the full training data, HDC-MER achieves higher classification accuracy for both valence (83.2% vs. 80.1%) and arousal (70.1% vs. 68.4%) using only 1/4 training data. HDC-MER also achieves at least 5% higher averaged accuracy compared to all the other methods in any point along the learning curve.","PeriodicalId":273095,"journal":{"name":"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"48","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICAS.2019.8771622","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 48
Abstract
To interact naturally and achieve mutual sympathy between humans and machines, emotion recognition is one of the most important functions for realizing advanced human-computer interaction devices. Because emotion is highly correlated with involuntary physiological changes, physiological signals are a prime candidate for emotion analysis. However, because a high-quality machine learning model requires a huge amount of training data, computational complexity becomes a major bottleneck. To overcome this issue, brain-inspired hyperdimensional (HD) computing, an energy-efficient and fast-learning computational paradigm, has high potential to balance accuracy against the amount of necessary training data. We propose an HD Computing-based Multimodality Emotion Recognition (HDC-MER) method. HDC-MER maps real-valued features to binary HD vectors using a random nonlinear function, further encodes them over time, and fuses them across different modalities including GSR, ECG, and EEG. The experimental results show that, compared to the best method using the full training data, HDC-MER achieves higher classification accuracy for both valence (83.2% vs. 80.1%) and arousal (70.1% vs. 68.4%) using only 1/4 of the training data. HDC-MER also achieves at least 5% higher average accuracy than all the other methods at any point along the learning curve.
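To make the encoding pipeline described in the abstract concrete, the sketch below illustrates the general hyperdimensional-computing pattern it refers to: real-valued features are mapped to binary hypervectors through a fixed random projection with a sign nonlinearity, temporal order is marked by cyclic shifts, and modalities (e.g., GSR, ECG, EEG) are fused by majority-vote bundling. All names, the dimensionality D = 10000, and the specific projection/binding/bundling choices are assumptions for illustration and are not taken from the paper; HDC-MER's exact encoder may differ.

```python
import numpy as np

# Minimal sketch of HD encoding in the spirit of HDC-MER (assumed details:
# D, sign nonlinearity, cyclic-shift temporal binding, majority-vote bundling).
D = 10000                      # hypervector dimensionality, a typical HD choice
rng = np.random.default_rng(0)

def make_projection(n_features, dim=D):
    # Fixed random projection matrix; one is drawn per modality and reused.
    return rng.standard_normal((dim, n_features))

def project(W, x):
    # Random nonlinear mapping: real-valued features -> binary hypervector.
    return (W @ x > 0).astype(np.uint8)

def bundle(hvs):
    # Majority-vote bundling of a list of binary hypervectors.
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def encode_modality(W, feature_windows):
    # Encode a temporal sequence: project each window, cyclically shift it by
    # its time index to mark order, then bundle into one hypervector.
    hvs = [np.roll(project(W, x), t) for t, x in enumerate(feature_windows)]
    return bundle(hvs)

# Toy usage: three modalities, 5 time windows of 32 features each.
modalities = ("GSR", "ECG", "EEG")
projections = {m: make_projection(32) for m in modalities}
windows = {m: [rng.standard_normal(32) for _ in range(5)] for m in modalities}
fused = bundle([encode_modality(projections[m], windows[m]) for m in modalities])
print(fused.shape)  # (10000,) fused multimodal hypervector
```

In an HD classifier built this way, class prototypes are typically formed by bundling the fused hypervectors of training trials per class, and a query is labeled by nearest Hamming distance to the prototypes; this fast, one-pass accumulation is what lets HD methods learn from a fraction of the training data.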