
Latest publications: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII)

Gestural and Postural Reactions to Stressful Event: Design of a Haptic Stressful Stimulus
Yoren Gaffary, David Antonio Gómez Jáuregui, Jean-Claude Martin, M. Ammi
Previous studies about kinesthetic expressions of emotions are mainly based on acted expressions of affective states, which might be quite different from spontaneous expressions. In a previous study, we proposed a task to collect haptic expressions of spontaneous stress. In this paper, we explore the effectiveness of this task at inducing spontaneous stress in two ways: through subjective feedback, and through a more objective approach-avoidance behavior measure.
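The approach-avoidance measure mentioned above can be made concrete with a small sketch. The function below computes a hypothetical approach-avoidance index from a trace of hand positions; the sampling rate, the distance-based definition, and all names are our illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def approach_avoidance_index(positions, stimulus_pos, dt=0.01):
    """Hypothetical approach-avoidance index: mean rate of change of the
    hand-to-stimulus distance. Negative values indicate net approach,
    positive values net avoidance. `positions` is an (N, 3) array of hand
    positions sampled every `dt` seconds; `stimulus_pos` is a 3-vector."""
    distances = np.linalg.norm(positions - stimulus_pos, axis=1)
    return np.mean(np.diff(distances) / dt)

# Toy usage: a hand retreating from the stimulus yields a positive index.
trace = np.linspace([0.0, 0.0, 0.1], [0.0, 0.0, 0.5], 100)
print(approach_avoidance_index(trace, np.zeros(3)))  # > 0: avoidance
```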
DOI: 10.1109/ACII.2015.7344696 · pp. 988-992 · Published 2015-09-21
Citations: 3
Adapting sentiment analysis to face-to-face human-agent interactions: From the detection to the evaluation issues
Caroline Langlet, C. Clavel
This paper introduces a sentiment analysis method suited to face-to-face human-agent interactions. We position our system and its evaluation protocol with respect to the existing sentiment analysis literature and detail how the proposed system addresses the issues specific to human-agent interaction. Finally, we provide an in-depth analysis of the evaluation results, opening a discussion of the remaining difficulties and challenges of sentiment analysis in human-agent interactions.
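As background on the detection side, the sketch below shows a minimal form of utterance-level polarity scoring; the toy lexicon and negation rule are illustrative assumptions only and do not reproduce the authors' method.

```python
# Minimal sketch of utterance-level polarity scoring with a toy lexicon.
# The lexicon entries and negation handling are illustrative assumptions.
LEXICON = {"great": 1, "like": 1, "helpful": 1, "bad": -1, "hate": -1, "boring": -1}
NEGATIONS = {"not", "never", "no"}

def utterance_polarity(utterance: str) -> int:
    score, negate = 0, False
    for token in utterance.lower().split():
        if token in NEGATIONS:
            negate = True
            continue
        value = LEXICON.get(token.strip(".,!?"), 0)
        score += -value if negate else value
        negate = False
    # Collapse to a polarity label: 1 positive, -1 negative, 0 neutral.
    return (score > 0) - (score < 0)

print(utterance_polarity("I do not like this agent"))  # -1
```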
DOI: 10.1109/ACII.2015.7344545 · pp. 14-20 · Published 2015-09-21
Citations: 9
Cross-corpus analysis for acoustic recognition of negative interactions
I. Lefter, H. Nefs, C. Jonker, L. Rothkrantz
Recent years have witnessed a growing interest in recognizing emotions and events from speech. One application of such systems is automatically detecting when a situation gets out of hand and human intervention is needed. Most studies have focused on increasing recognition accuracy by using parts of the same dataset for training and testing. However, this says little about how such a trained system can be expected to perform `in the wild'. In this paper we present a cross-corpus study using the audio part of three multimodal datasets containing negative human-human interactions. We report intra- and cross-corpus accuracies while manipulating the acoustic features, the normalization schemes, and the oversampling of the least represented class to alleviate the negative effects of data imbalance. We observe a decrease in performance when disjoint corpora are used for training and testing. Merging two datasets for training results in slightly lower performance than the best result obtained by training on a single corpus. A hand-crafted low-dimensional feature set is competitive with a brute-force high-dimensional feature vector. Corpus normalization and artificially creating samples of the sparsest class both have a positive effect.
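The normalization and oversampling steps described in the abstract can be sketched as follows; the SVM classifier and the synthetic placeholder corpora are our assumptions, not the paper's actual setup.

```python
# Sketch of a cross-corpus protocol: per-corpus z-normalization, random
# oversampling of the minority class, training on one corpus and testing
# on a disjoint one. Feature matrices and labels are placeholders.
import numpy as np
from sklearn.svm import SVC

def z_normalize(X):
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)

def oversample_minority(X, y, rng):
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    idx = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)  # corpus A
X_test, y_test = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)    # corpus B

X_tr, y_tr = oversample_minority(z_normalize(X_train), y_train, rng)
clf = SVC().fit(X_tr, y_tr)
print("cross-corpus accuracy:", clf.score(z_normalize(X_test), y_test))
```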
DOI: 10.1109/ACII.2015.7344562 · pp. 132-138 · Published 2015-09-21
Citations: 15
From simulated speech to natural speech, what are the robust features for emotion recognition?
Ya Li, Linlin Chao, Yazhu Liu, Wei Bao, J. Tao
The earliest research on emotion recognition started with simulated/acted stereotypical emotional corpora and later extended to elicited corpora. Recently, the demands of real applications have forced research to shift to natural, spontaneous corpora. Previous research shows that emotion recognition accuracy declines gradually from simulated speech to elicited and fully natural speech. This paper investigates the effects of commonly used spectral, prosodic, and voice quality features on emotion recognition across the three types of corpora, and identifies which features remain robust for natural speech. Emotion recognition with several common machine learning methods is carried out and thoroughly compared, and three feature selection methods are applied to find the robust features. The results on six commonly used corpora confirm that recognition accuracy decreases as the corpus changes from simulated to natural. In addition, prosodic and voice quality features are robust for emotion recognition on simulated corpora, while spectral features are robust on elicited and natural corpora.
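A minimal sketch of extracting the three feature groups compared in the paper is given below, using librosa; the specific descriptors and summary statistics are our assumptions (zero-crossing rate stands in crudely for voice quality).

```python
# Sketch of the three feature groups: spectral (MFCCs), prosodic (pitch,
# energy), and a crude voice quality proxy (zero-crossing rate).
import numpy as np
import librosa

def utterance_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # spectral
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)         # prosody: pitch
    energy = librosa.feature.rms(y=y)                     # prosody: energy
    zcr = librosa.feature.zero_crossing_rate(y)           # voice quality proxy
    # Summarize each frame-level track with its mean and standard deviation.
    tracks = [mfcc, f0[np.newaxis, :], energy, zcr]
    return np.concatenate([np.r_[t.mean(axis=1), t.std(axis=1)] for t in tracks])
```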
DOI: 10.1109/ACII.2015.7344597 · pp. 368-373 · Published 2015-09-21
Citations: 19
Cognitive state measurement from eye gaze analysis in an intelligent virtual reality driving system for autism intervention
Lian Zhang, Joshua W. Wade, A. Swanson, A. Weitlauf, Z. Warren, N. Sarkar
Autism Spectrum Disorder (ASD) is a group of neurodevelopmental disabilities with a high prevalence rate. While much research has focused on improving social communication deficits in ASD populations, less emphasis has been devoted to improving skills relevant to adult independent living, such as driving. In this paper, a novel virtual reality (VR)-based driving system with tasks at different difficulty levels is presented to train and improve the driving skills of teenagers with ASD. The goal of this paper is to measure the cognitive load experienced by an individual with ASD while driving in the VR-based driving system. Several eye gaze features that vary with cognitive load are identified in an experiment in which 12 teenagers with ASD participated. Several machine learning methods are compared, and their ability to accurately measure cognitive load is validated against the subjective rating of a therapist. The results will be used to build models for an intelligent VR-based driving system that can sense a participant's real-time cognitive load and offer driving tasks at an appropriate difficulty level in order to maximize the participant's long-term performance.
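The model-comparison step can be sketched as a cross-validated benchmark of several common classifiers; the gaze features and labels below are synthetic placeholders rather than the study's data.

```python
# Cross-validated comparison of common classifiers on gaze features against
# binary high/low cognitive load labels derived from a therapist's rating.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))   # e.g., fixation duration, pupil size, ...
y = rng.integers(0, 2, 120)     # therapist-rated load: 0 = low, 1 = high

for name, clf in [("SVM", SVC()),
                  ("kNN", KNeighborsClassifier()),
                  ("Tree", DecisionTreeClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```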
DOI: 10.1109/ACII.2015.7344621 · pp. 532-538 · Published 2015-09-21
Citations: 12
Framework for combination aware AU intensity recognition
Isabel Gonzalez, W. Verhelst, Meshia Cédric Oveneke, H. Sahli, D. Jiang
We present a framework for combination-aware AU intensity recognition. It includes a feature extraction approach that can handle small head movements and does not require face alignment. A three-layered structure is used for AU classification: the first layer is dedicated to independent AU recognition, the second layer incorporates AU combination knowledge, and the third layer handles AU dynamics with a variable-duration semi-Markov model. The first two layers are modeled using extreme learning machines (ELMs). ELMs perform comparably to support vector machines but are computationally more efficient, can handle multi-class classification directly, and include feature selection via manifold regularization. We show that the proposed layered classification scheme improves results by considering AU combinations as well as intensity recognition.
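Since ELMs are the workhorse of the first two layers, a minimal ELM sketch is shown below: a random hidden layer followed by a least-squares fit of the output weights, which is what makes training cheaper than for SVMs. Layer sizes and the sigmoid activation are illustrative; the paper's manifold-regularized variant adds a penalty term not shown here.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid layer

    def fit(self, X, y):
        # Hidden weights are random and never trained; only the output
        # weights beta are fit, by a pseudo-inverse (least squares).
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        T = np.eye(y.max() + 1)[y]                    # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(300, 10)), rng.integers(0, 6, 300)  # 6 intensity levels
print((ELM().fit(X, y).predict(X) == y).mean())             # training accuracy
```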
DOI: 10.1109/ACII.2015.7344631 · pp. 602-608 · Published 2015-09-21
Citations: 3
A warm touch of affect?
Christian J. A. M. Willemse
One of the research areas within affective Computer-Mediated Communication currently under investigation is mediated social touch. A social touch is a complex composition of different physical parameters that can be simulated by haptic technologies. In this article we argue why we think it makes sense to incorporate warmth - and in particular simulations of one's body heat - in mediated communication devices: physical warmth affects perceptions of social warmth, and our skin temperature can be considered a display of our socio-emotional state. Moreover, we outline specific research domains for the current PhD project.
DOI: 10.1109/ACII.2015.7344656 · pp. 766-771 · Published 2015-09-21
Citations: 5
A linear regression model to detect user emotion for touch input interactive systems
S. Bhattacharya
Human emotion plays a significant role in our reasoning, learning, cognition, and decision making, which in turn may affect the usability of interactive systems. Detecting the emotion of interactive system users is therefore important, as it can help in designing for improved user experience. In this work, we propose a model to detect the emotional state of users of touch screen devices. Although a number of methods have been developed to detect human emotion, they are computationally intensive and require setup costs. The model we propose aims to avoid these limitations and make the detection process viable for mobile platforms. We assume three emotional states of a user: positive, negative, and neutral. The touch interaction is characterized by a set of seven features derived from finger strokes and taps. Our proposed model is a linear combination of these features. The model is developed and validated with empirical data from 57 participants performing 7 touch input tasks. The validation study demonstrates a high prediction accuracy of 90.47%.
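Under one plausible reading of the abstract, the model can be sketched as a least-squares fit of the seven features to coded emotion labels, with prediction by nearest code; the feature names below are hypothetical placeholders and the data are synthetic.

```python
import numpy as np

# Hypothetical names for the seven stroke/tap features; the paper does not
# list them at this level of detail.
FEATURES = ["stroke_speed", "stroke_length", "stroke_pressure",
            "tap_duration", "tap_pressure", "inter_tap_time", "stroke_curvature"]
CODES = np.array([-1.0, 0.0, 1.0])  # negative, neutral, positive

def fit(X, y):
    A = np.c_[X, np.ones(len(X))]               # add an intercept column
    w, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares weights
    return w

def predict(X, w):
    scores = np.c_[X, np.ones(len(X))] @ w
    return CODES[np.abs(scores[:, None] - CODES).argmin(axis=1)]  # nearest code

rng = np.random.default_rng(0)
X = rng.normal(size=(57 * 7, len(FEATURES)))    # 57 participants x 7 tasks
y = rng.choice(CODES, size=len(X))
w = fit(X, y)
print("fit accuracy:", (predict(X, w) == y).mean())
```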
DOI: 10.1109/ACII.2015.7344693 · pp. 970-975 · Published 2015-09-21
Citations: 7
Building autonomous sensitive artificial listeners (Extended abstract)
M. Schröder, Elisabetta Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. Maat, G. McKeown, Sathish Pammi, M. Pantic, C. Pelachaud, Björn Schuller, E. D. Sevin, M. Valstar, M. Wöllmer
This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and non-verbal interaction capabilities. The work is motivated by the aim of providing technology with the competences needed to perceive and produce the emotional and non-verbal behaviours required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting that seems particularly suited to the study of emotional and non-verbal behaviour, since it requires only very limited verbal understanding on the part of the machine. This scenario allows us to concentrate on non-verbal capabilities without having to simultaneously address the challenges of spoken language understanding, task modeling, etc. We first summarise three prototype versions of the SAL scenario, in which the behaviour of the Sensitive Artificial Listener characters was determined by a human operator. These prototypes served to verify the effectiveness of the SAL scenario and allowed us to collect the data required for building system components that analyse and synthesise the respective behaviours. We then describe the fully autonomous integrated real-time system we created, which combines incremental analysis of user behaviour, dialogue management, and synthesis of the speaker and listener behaviour of a SAL character displayed as a virtual agent. We discuss principles that should underlie the evaluation of SAL-type systems. Since the system is designed for modularity and reuse, and since it is publicly available, the SAL system has potential as a joint research tool for the affective computing research community.
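The analysis, dialogue management, and synthesis components the abstract describes can be pictured with a toy loop like the one below; all class and method names are our invention for illustration, not the SAL system's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    arousal: float     # incremental estimates updated while the user speaks
    valence: float
    is_speaking: bool

class DialogueManager:
    def select_action(self, state: UserState) -> str:
        if state.is_speaking:
            # Listener behaviour: backchannel while the user holds the turn.
            return "nod" if state.valence >= 0 else "frown"
        # Speaker behaviour: pick an utterance matched to the user's affect.
        return "say_cheerful" if state.arousal > 0.5 else "say_calm"

class BehaviourSynthesizer:
    def render(self, action: str) -> None:
        print(f"agent performs: {action}")  # stand-in for audiovisual output

# One pass around the loop with a mock analysis result.
dm, synth = DialogueManager(), BehaviourSynthesizer()
synth.render(dm.select_action(UserState(arousal=0.7, valence=0.2, is_speaking=True)))
```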
DOI: 10.1109/ACII.2015.7344610 · pp. 456-462 · Published 2015-09-21
Citations: 2
Multi-pose facial expression recognition based on SURF boosting
Qiyu Rao, Xingtai Qu, Qi-rong Mao, Yongzhao Zhan
Human Computer Interaction (HCI) is today one of the most important topics in the machine vision and image processing fields. The ability to handle multi-pose facial expressions is important for computers to understand affective behavior in less constrained environments. In this paper, we propose a SURF (Speeded-Up Robust Features) boosting framework to address challenging issues in multi-pose facial expression recognition (FER). Local SURF features from different overlapping patches are selected by boosting in our model, focusing on more discriminative representations of facial expression. The paper also proposes a novel training step during boosting. Experiments using the proposed method demonstrate favorable results on the RaFD and KDEF databases.
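A sketch of boosting over patch-wise SURF descriptors follows; note that SURF requires an opencv-contrib build with the nonfree modules enabled, and the patch grid plus scikit-learn's AdaBoost are stand-ins for the paper's framework.

```python
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# SURF lives in cv2.xfeatures2d and may be unavailable in default builds.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def patch_descriptor(gray, x, y, size=32):
    # One SURF descriptor computed at a fixed keypoint in the patch center.
    kp = [cv2.KeyPoint(float(x), float(y), float(size))]
    _, desc = surf.compute(gray, kp)
    return desc[0] if desc is not None else np.zeros(64)

def image_features(gray, step=24):
    # Overlapping grid of patches; descriptors are concatenated per image.
    h, w = gray.shape
    return np.concatenate([patch_descriptor(gray, x, y)
                           for y in range(step, h - step, step)
                           for x in range(step, w - step, step)])

# With features stacked per image, AdaBoost's decision stumps implicitly
# select the most discriminative descriptor dimensions across patches:
# X = np.vstack([image_features(img) for img in images]); y = labels
# clf = AdaBoostClassifier(n_estimators=200).fit(X, y)
```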
DOI: 10.1109/ACII.2015.7344635 · pp. 630-635 · Published 2015-09-21
Citations: 49