Latest Publications from the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII)

Perception of congruent facial and kinesthetic expressions of emotions
Yoren Gaffary, Jean-Claude Martin, M. Ammi
The use of virtual avatars, through facial or gestural expressions, is considered a main support for affective communication. Various works have studied the potential of a kinesthetic channel for conveying such information, but they have not yet investigated the complementarity between visual and kinesthetic feedback for conveying emotion effectively. This paper studies the relation between some emotional dimensions and the visual and kinesthetic modalities. The experimental results show that subjects used visual and kinesthetic feedback to evaluate the pleasure and the arousal dimensions, respectively. We also observed a link between the recognition rate of emotions expressed with the visual modality (resp. kinesthetic modality) and the magnitude of that emotion's pleasure dimension (resp. arousal dimension). These results should help in selecting feedback according to the features of the emotion under investigation.
{"title":"Perception of congruent facial and kinesthetic expressions of emotions","authors":"Yoren Gaffary, Jean-Claude Martin, M. Ammi","doi":"10.1109/ACII.2015.7344697","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344697","url":null,"abstract":"The use of virtual avatars, through facial or gestural expressions, is considered to be a main support for affective communication. Currently, different works have studied the potential of a kinesthetic channel for conveying such information. However, they still have not investigated the complementarity between visual and kinesthetic feedback to effectively convey emotion. This paper studies the relation between some emotional dimensions and the visual and kinesthetic modalities. The experimental results show that subjects used visual and kinesthetic feedbacks to evaluate the pleasure and the arousal dimensions, respectively. We also observed a link between the recognition rate of emotions expressed with the visual modality (resp. kinesthetic modality) and the magnitude of that emotion's pleasure dimension (resp. arousal dimension). These different results should help in the selection of feedbacks according to the features of the investigated emotion.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"20 1","pages":"993-998"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84542449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Cross-language acoustic emotion recognition: An overview and some tendencies
S. M. Feraru, Dagmar M. Schuller, Björn Schuller
Automatic emotion recognition from speech has matured close to the point of broader commercial interest. One of the last major limiting factors is the ability to deal with the multilingual input that a real-life operating system will face in many if not most cases. Since speech in real-life scenarios is often mixed across languages, more experience is needed on how cross-language recognition affects performance. In this contribution we first provide an overview of the languages covered in research on emotion and speech, finding that only roughly two thirds of native speakers' languages have been touched upon so far. We then shed light on mismatched versus matched-condition emotion recognition across a variety of languages. By intention, we include less-researched languages of more distant language families, such as Burmese, Romanian, or Turkish. Binary arousal and valence mappings are employed in order to train and test across databases that were originally labelled in diverse categories. In the result - as one may expect - arousal recognition works considerably better across languages than valence recognition, and cross-language recognition falls considerably behind within-language recognition. However, within-language-family recognition seems to provide an `emergency solution' where language resources are missing, and the notable differences observed depending on the combination of languages show a number of interesting effects.
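A minimal sketch of the binary arousal/valence mapping described above, assuming a hypothetical label-to-dimension table (the paper's actual per-corpus mapping is not reproduced here): categorical emotion labels from differently annotated corpora are projected onto a shared binary target space so that a model trained on one database can be tested on another.

```python
# Hypothetical mapping: emotion category -> (arousal, valence),
# where 1 = high arousal / positive valence, 0 = low / negative.
BINARY_MAP = {
    "anger":   (1, 0),
    "joy":     (1, 1),
    "sadness": (0, 0),
    "neutral": (0, 1),
    "boredom": (0, 0),
}

def to_binary(labels, axis):
    """Map categorical labels to a binary arousal or valence target."""
    idx = 0 if axis == "arousal" else 1
    return [BINARY_MAP[label][idx] for label in labels]

# Labels from any corpus covered by the map share the same binary target space
print(to_binary(["anger", "sadness", "joy"], "arousal"))  # [1, 0, 1]
```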
{"title":"Cross-language acoustic emotion recognition: An overview and some tendencies","authors":"S. M. Feraru, Dagmar M. Schuller, Björn Schuller","doi":"10.1109/ACII.2015.7344561","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344561","url":null,"abstract":"Automatic emotion recognition from speech has matured close to the point where it reaches broader commercial interest. One of the last major limiting factors is the ability to deal with multilingual inputs as will be given in a real-life operating system in many if not most cases. As in real-life scenarios speech is often used mixed across languages more experience will be needed in performance effects of cross-language recognition. In this contribution we first provide an overview on languages covered in the research on emotion and speech finding that only roughly two thirds of native speakers' languages are so far touched upon. We thus next shed light on mis-matched vs matched condition emotion recognition across a variety of languages. By intention, we include less researched languages of more distant language families such as Burmese, Romanian or Turkish. Binary arousal and valence mapping is employed in order to be able to train and test across databases that have originally been labelled in diverse categories. In the result - as one may expect - arousal recognition works considerably better across languages than valence, and cross-language recognition falls considerably behind within-language recognition. However, within-language family recognition seems to provide an `emergency-solution' in case of missing language resources, and the observed notable differences depending on the combination of languages show a number of interesting effects.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"17 1","pages":"125-131"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84557224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 58
Pain level recognition using kinematics and muscle activity for physical rehabilitation in chronic pain
Temitayo A. Olugbade, N. Bianchi-Berthouze, Nicolai Marquardt, A. Williams
People with chronic musculoskeletal pain would benefit from technology that provides run-time personalized feedback and helps adjust their physical exercise plan. However, increased pain during physical exercise, or anxiety about anticipated pain increase, may lead to setbacks and intensified sensitivity to pain. Our study investigates the possibility of detecting pain levels from the quality of body movement during two functional physical exercises. By analyzing recordings of kinematics and muscle activity, our feature optimization algorithms and machine learning techniques can automatically discriminate between people with low-level pain, people with high-level pain, and control participants while exercising. The best results were obtained with feature set optimization algorithms: 94% and 80% accuracy for the full trunk flexion and sit-to-stand movements, respectively, using Support Vector Machines. As depression can affect the pain experience, we included participants' depression scores from a standard questionnaire, and this improved discrimination between the control participants and the people with pain when Random Forests were used.
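As a rough illustration of the classification setup the abstract describes, here is a minimal sketch, not the authors' pipeline: a Support Vector Machine discriminating control, low-level-pain, and high-level-pain groups from movement features. The data, feature count, and cross-validation scheme are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 12))      # 90 exercise trials x 12 kinematic/EMG features
y = rng.integers(0, 3, size=90)    # 0 = control, 1 = low-level pain, 2 = high-level pain

# Standardize features, then classify with an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```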
{"title":"Pain level recognition using kinematics and muscle activity for physical rehabilitation in chronic pain","authors":"Temitayo A. Olugbade, N. Bianchi-Berthouze, Nicolai Marquardt, A. Williams","doi":"10.1109/ACII.2015.7344578","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344578","url":null,"abstract":"People with chronic musculoskeletal pain would benefit from technology that provides run-time personalized feedback and help adjust their physical exercise plan. However, increased pain during physical exercise, or anxiety about anticipated pain increase, may lead to setback and intensified sensitivity to pain. Our study investigates the possibility of detecting pain levels from the quality of body movement during two functional physical exercises. By analyzing recordings of kinematics and muscle activity, our feature optimization algorithms and machine learning techniques can automatically discriminate between people with low level pain and high level pain and control participants while exercising. Best results were obtained from feature set optimization algorithms: 94% and 80% for the full trunk flexion and sit-to-stand movements respectively using Support Vector Machines. As depression can affect pain experience, we included participants' depression scores on a standard questionnaire and this improved discrimination between the control participants and the people with pain when Random Forests were used.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"8 1","pages":"243-249"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79952632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 49
Real-time robust recognition of speakers' emotions and characteristics on mobile platforms
F. Eyben, Bernd Huber, E. Marchi, Dagmar M. Schuller, Björn Schuller
We demonstrate audEERING's sensAI technology running natively on low-resource mobile devices, applied to emotion analytics and speaker characterisation tasks. A showcase application for the Android platform is provided, where audEERING's highly noise-robust voice activity detection based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) is combined with our core emotion recognition and speaker characterisation engine natively on the mobile device. This eliminates the need for network connectivity and allows robust speaker state and trait recognition to be performed efficiently in real time without network transmission lags. Real-time factors are benchmarked on a popular mobile device to demonstrate the efficiency, and average response times are compared to a server-based approach. The output of the emotion analysis is visualized graphically in the arousal-valence space alongside the emotion category and further speaker characteristics.
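The real-time factor benchmark mentioned above has a simple definition: processing time divided by audio duration, so values below 1 indicate the analysis keeps pace with incoming audio. A minimal sketch, with a stand-in processing function (the actual sensAI engine is proprietary and not shown):

```python
import time

def real_time_factor(process, audio, audio_duration_s):
    """Return processing_time / audio_duration for one analysis pass."""
    start = time.perf_counter()
    process(audio)                 # stand-in for a VAD + emotion recognition pass
    elapsed = time.perf_counter() - start
    return elapsed / audio_duration_s

# Hypothetical usage with a dummy analysis function over 1 s of "audio"
rtf = real_time_factor(lambda a: sum(a), list(range(16000)), audio_duration_s=1.0)
print(f"RTF = {rtf:.4f} ({'real-time capable' if rtf < 1 else 'too slow'})")
```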
{"title":"Real-time robust recognition of speakers' emotions and characteristics on mobile platforms","authors":"F. Eyben, Bernd Huber, E. Marchi, Dagmar M. Schuller, Björn Schuller","doi":"10.1109/ACII.2015.7344658","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344658","url":null,"abstract":"We demonstrate audEERING's sensAI technology running natively on low-resource mobile devices applied to emotion analytics and speaker characterisation tasks. A showcase application for the Android platform is provided, where au-dEERING's highly noise robust voice activity detection based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN) is combined with our core emotion recognition and speaker characterisation engine natively on the mobile device. This eliminates the need for network connectivity and allows to perform robust speaker state and trait recognition efficiently in real-time without network transmission lags. Real-time factors are benchmarked for a popular mobile device to demonstrate the efficiency, and average response times are compared to a server based approach. The output of the emotion analysis is visualized graphically in the arousal and valence space alongside the emotion category and further speaker characteristics.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"28 1","pages":"778-780"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86693926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Affective handshake with a humanoid robot: How do participants perceive and combine its facial and haptic expressions?
Mohamed Yacine Tsalamlal, Jean-Claude Martin, M. Ammi, A. Tapus, M. Amorim
This study presents an experiment highlighting how participants combine facial expressions and haptic feedback to perceive emotions when interacting with an expressive humanoid robot. Participants were asked to interact with the humanoid robot through a handshake behavior while looking at its facial expressions. Experimental data were examined within the information integration theory framework. Results revealed that participants combined Facial and Haptic cues additively to evaluate the Valence, Arousal, and Dominance dimensions. The relative importance of each modality was different across the emotional dimensions. Participants gave more importance to facial expressions when evaluating Valence. They gave more importance to haptic feedback when evaluating Arousal and Dominance.
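The additive cue combination reported above can be illustrated with a simple linear model in the spirit of information integration theory: a judged dimension such as Valence is modeled as a weighted sum of the facial and haptic cue levels, and the fitted weights indicate each modality's relative importance. A minimal sketch on synthetic data (the weights below are placeholders, not the study's results):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
facial = rng.uniform(-1, 1, size=200)   # scaled facial-expression cue level
haptic = rng.uniform(-1, 1, size=200)   # scaled handshake (haptic) cue level
# Synthetic judgments: facial cues weighted more heavily, plus noise
valence = 0.7 * facial + 0.3 * haptic + rng.normal(0, 0.1, size=200)

model = LinearRegression().fit(np.column_stack([facial, haptic]), valence)
w_facial, w_haptic = model.coef_
# Relative weights show which modality dominates the judgment
print(f"facial weight {w_facial:.2f} vs haptic weight {w_haptic:.2f}")
```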
{"title":"Affective handshake with a humanoid robot: How do participants perceive and combine its facial and haptic expressions?","authors":"Mohamed Yacine Tsalamlal, Jean-Claude Martin, M. Ammi, A. Tapus, M. Amorim","doi":"10.1109/ACII.2015.7344592","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344592","url":null,"abstract":"This study presents an experiment highlighting how participants combine facial expressions and haptic feedback to perceive emotions when interacting with an expressive humanoid robot. Participants were asked to interact with the humanoid robot through a handshake behavior while looking at its facial expressions. Experimental data were examined within the information integration theory framework. Results revealed that participants combined Facial and Haptic cues additively to evaluate the Valence, Arousal, and Dominance dimensions. The relative importance of each modality was different across the emotional dimensions. Participants gave more importance to facial expressions when evaluating Valence. They gave more importance to haptic feedback when evaluating Arousal and Dominance.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"7 1","pages":"334-340"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89725926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Towards incorporating affective feedback into context-aware intelligent environments
D. Saha, Thomas L. Martin, R. Benjamin Knapp
Determining the relevance of services from intelligent environments is a critical step in implementing a reliable context-aware ambient intelligent system. Providing explicit indications to the system is effective in communicating this relevance; however, such explicit indications come at the cost of the user's cognitive resources. In this work, we strive to create a novel pathway of implicit communication between users and their ambient intelligence by employing the user's stress as a feedback pathway to the intelligent system. In addition, following a few very recent works, we propose using proven laboratory stressors to collect ground-truth data for stressed states. We present results from a preliminary pilot study which show promise for creating this implicit channel of communication and demonstrate the feasibility of using laboratory stressors as a reliable method of ground-truth collection for stressed states.
{"title":"Towards incorporating affective feedback into context-aware intelligent environments","authors":"D. Saha, Thomas L. Martin, R. Benjamin Knapp","doi":"10.1109/ACII.2015.7344550","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344550","url":null,"abstract":"Determining the relevance of services from intelligent environments is a critical step in implementing a reliable context-aware ambient intelligent system. Designing the provision of explicit indications to the system is effective in communicating this relevance, however, such explicit indications come at the cost of user's cognitive resources. In this work, we strive to create a novel pathway of implicit communication between the user and their ambient intelligence by employing user's stress as a feedback pathway to the intelligent system. In addition, following a few very recent works, we propose using proven laboratory stressors to collect ground truth data for stressed states. We present results from a preliminary pilot study which shows promise for creating this implicit channel of communication as well as proves the feasibility of using laboratory stressors as a reliable method of ground truth collection for stressed states.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"85 1","pages":"49-55"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78185478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Modeling head motion entrainment for prediction of couples' behavioral characteristics
Bo Xiao, P. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan
Our work examines the link between the head motion entrainment of interacting couples and human experts' judgments of certain overall behavioral characteristics (e.g., Blame patterns). We employ a data-driven model that clusters head motion in an unsupervised manner into elementary types called kinemes. We propose three groups of similarity measures based on Kullback-Leibler divergence to model entrainment. We find that the divergence of the (joint) distribution of kinemes yields consistent and significant correlation with the target behavioral characteristics. The divergence of the conditional distribution of kinemes is shown to predict the polarity of the behavioral characteristics. We partly explain the strong correlations by associating the conditional distributions with the prominent behavioral implications of their respective associated kinemes. These results show the possibility of inferring human behavioral characteristics through the modeling of dyadic head motion entrainment.
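A minimal sketch of a Kullback-Leibler-based entrainment measure over kineme histograms, in the spirit of the abstract; the paper's three groups of similarity measures are not reproduced, and the kineme counts below are hypothetical. scipy's entropy(p, q) computes KL(p || q):

```python
import numpy as np
from scipy.stats import entropy

def symmetric_kl(p, q, eps=1e-10):
    """Symmetrized KL divergence between two kineme histograms."""
    p = np.asarray(p, dtype=float) + eps   # smoothing avoids log(0)
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * (entropy(p, q) + entropy(q, p))

# Hypothetical kineme counts for the two partners over one interaction
partner_a = [30, 12, 5, 20, 8]
partner_b = [25, 15, 7, 18, 10]
print(f"entrainment distance: {symmetric_kl(partner_a, partner_b):.4f}")
```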
{"title":"Modeling head motion entrainment for prediction of couples' behavioral characteristics","authors":"Bo Xiao, P. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan","doi":"10.1109/ACII.2015.7344556","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344556","url":null,"abstract":"Our work examines the link between head motion entrainment of interacting couples and human expert's judgment on certain overall behavioral characteristics (e.g., Blame patterns). We employ a data-driven model that clusters head motion in an unsupervised manner into elementary types called kinemes. We propose three groups of similarity measures based on Kullback-Leibler divergence to model entrainment. We find that the divergence of the (joint) distribution of kinemes yields consistent and significant correlation with target behavior characteristics. The divergence of the conditional distribution of kinemes is shown to predict the polarity of the behavioral characteristics. We partly explain the strong correlations via associating the conditional distributions with the prominent behavioral implications of their respective associated kinemes. These results show the possibility of inferring human behavioral characteristics through the modeling of dyadic head motion entrainment.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"146 1","pages":"91-97"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87583652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Relevant body cues for the classification of emotional body expression in daily actions
Nesrine Fourati, C. Pelachaud
In the context of emotional body expression, previous works have mainly focused on perceptual studies to identify the most important expressive cues. Only a few studies have given insight into which body cues could be relevant for the classification and characterization of emotions expressed in body movement. In this paper, we present our Random Forest based feature selection approach for identifying relevant expressive body cues in the context of emotional body expression classification. We also discuss the ranking of relevant body cues according to each expressed emotion across a set of daily actions.
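A minimal sketch of Random Forest based feature ranking of the kind described above: impurity-based importances order candidate body cues by relevance to the emotion classes. The cue names, class count, and data are hypothetical placeholders, not the paper's feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
cues = ["head_flexion", "arm_speed", "torso_lean", "hand_height"]
X = rng.normal(size=(300, len(cues)))  # one row per recorded action
y = rng.integers(0, 8, size=300)       # e.g. 8 expressed-emotion classes

# Fit the forest, then rank cues by impurity-based feature importance
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(forest.feature_importances_, cues), reverse=True)
for importance, cue in ranking:
    print(f"{cue:>14}: {importance:.3f}")
```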
{"title":"Relevant body cues for the classification of emotional body expression in daily actions","authors":"Nesrine Fourati, C. Pelachaud","doi":"10.1109/ACII.2015.7344582","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344582","url":null,"abstract":"In the context of emotional body expression, previous works mainly focused on perceptual studies to identify the most important expressive cues. Only few studies gave insights on which body cues could be relevant for the classification and the characterization of emotions expressed in body movement. In this paper, we present our Random Forest based feature selection approach for the identification of relevant expressive body cues in the context of emotional body expression classification. We also discuss the ranking of relevant body cues according to each expressed emotion across a set of daily actions.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"47 1","pages":"267-273"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79227362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Automatic assessment and analysis of public speaking anxiety: A virtual audience case study
T. Wörtwein, Louis-Philippe Morency, Stefan Scherer
Public speaking has become an integral part of many professions and is central to career-building opportunities. Yet public speaking anxiety is often referred to as the most common fear in everyday life and can severely hinder one's ability to speak in public. While virtual and real audiences have been successfully utilized to treat public speaking anxiety in the past, little work has been done on identifying the behavioral characteristics of speakers suffering from anxiety. In this work, we focus on the characterization of behavioral indicators and the automatic assessment of public speaking anxiety. We identify several indicators of public speaking anxiety, among them less eye contact with the audience, reduced variability in the voice, and more pauses. We automatically assess public speaking anxiety, as reported by the speakers in a self-assessment questionnaire, using a speaker-independent paradigm. Our approach using ensemble trees achieves a high correlation between ground truth and our estimate (r=0.825). Complementary to automatic measures of anxiety, we are also interested in speakers' perceptual differences when interacting with a virtual audience, depending on their level of anxiety, in order to improve and further the development of virtual audiences for public speaking training and anxiety reduction.
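A minimal sketch of the assessment setup, assuming ensemble trees for regression and Pearson correlation as the score, mirroring the evaluation reported above; the features, anxiety scores, and cross-validation scheme are synthetic stand-ins, so the printed correlation will not match the paper's r=0.825:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(45, 10))              # one behavioural feature vector per speaker
y = X[:, 0] * 2 + rng.normal(0, 0.5, 45)   # stand-in self-reported anxiety scores

# With one sample per speaker, held-out predictions are speaker-independent
pred = cross_val_predict(ExtraTreesRegressor(random_state=0), X, y, cv=5)
r, _ = pearsonr(y, pred)
print(f"correlation between reported and estimated anxiety: r = {r:.3f}")
```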
{"title":"Automatic assessment and analysis of public speaking anxiety: A virtual audience case study","authors":"T. Wörtwein, Louis-Philippe Morency, Stefan Scherer","doi":"10.1109/ACII.2015.7344570","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344570","url":null,"abstract":"Public speaking has become an integral part of many professions and is central to career building opportunities. Yet, public speaking anxiety is often referred to as the most common fear in everyday life and can hinder one's ability to speak in public severely. While virtual and real audiences have been successfully utilized to treat public speaking anxiety in the past, little work has been done on identifying behavioral characteristics of speakers suffering from anxiety. In this work, we focus on the characterization of behavioral indicators and the automatic assessment of public speaking anxiety. We identify several indicators for public speaking anxiety, among them are less eye contact with the audience, reduced variability in the voice, and more pauses. We automatically assess the public speaking anxiety as reported by the speakers through a self-assessment questionnaire using a speaker independent paradigm. Our approach using ensemble trees achieves a high correlation between ground truth and our estimation (r=0.825). Complementary to automatic measures of anxiety, we are also interested in speakers' perceptual differences when interacting with a virtual audience based on their level of anxiety in order to improve and further the development of virtual audiences for the training of public speaking and the reduction of anxiety.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"77 1","pages":"187-193"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74804240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30
Engagement detection based on mutli-party cues for human robot interaction
Hanan Salam, M. Chetouani
In this paper, we address the problem of automatically detecting engagement in multi-party Human-Robot Interaction scenarios. The aim is to investigate to what extent we are able to infer the engagement of one entity of a group based solely on the cues of the other entities present in the interaction. In a scenario featuring three entities, two participants and a robot, we extract behavioural cues for each of the entities; we then build models based solely on each entity's cues and on combinations of them to predict the engagement level of each participant. Person-level cross-validation shows that we are capable of detecting the engagement of the participant in question using solely the behavioural cues of the robot, with higher accuracy than when using the participant's own cues (75.91% vs. 74.32%). Moreover, the behavioural cues of the other participant are also informative, permitting detection of the engagement of the participant in question with an average accuracy of 62.15%. The correlation between the other participant's features and the engagement labels of the participant in question suggests high cohesion between the two participants. In addition, the similarity of the most significantly correlated features between the two participants suggests high synchrony between the two parties.
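A minimal sketch of person-level cross-validation as used above: all samples from one participant are held out per fold, so the model never sees the test person during training. The features, labels, classifier, and group sizes are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 8))          # e.g. robot-side behavioural cues per sample
y = rng.integers(0, 2, size=120)       # engaged / not engaged
groups = np.repeat(np.arange(12), 10)  # 12 participants, 10 samples each

# Each fold holds out every sample from one participant
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(f"person-independent accuracy: {scores.mean():.2f}")
```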
{"title":"Engagement detection based on mutli-party cues for human robot interaction","authors":"Hanan Salam, M. Chetouani","doi":"10.1109/ACII.2015.7344593","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344593","url":null,"abstract":"In this paper, we address the problematic of automatic detection of engagement in multi-party Human-Robot Interaction scenarios. The aim is to investigate to what extent are we able to infer the engagement of one of the entities of a group based solely on the cues of the other entities present in the interaction. In a scenario featuring 3 entities: 2 participants and a robot, we extract behavioural cues that concern each of the entities, we then build models based solely on each of these entities' cues and on combinations of them to predict the engagement level of each of the participants. Person-level cross validation shows that we are capable of detecting the engagement of the participant in question using solely the behavioural cues of the robot with a high accuracy compared to using the participant's cues himself (75.91% vs. 74.32%). Moreover using the behavioural cues of the other participant is also informative where it permits the detection of the engagement of the participant in question at an accuracy of 62.15% on average. The correlation between the features of the other participant with the engagement labels of the participant in question suggests a high cohesion between the two participants. In addition, the similarity of the most significantly correlated features among the two participants suggests a high synchrony between the two parties.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"71 1","pages":"341-347"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76772461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24