
2015 International Conference on Affective Computing and Intelligent Interaction (ACII): Latest Publications

Avatar and participant gender differences in the perception of uncanniness of virtual humans
Jacqueline D. Bailey
The widespread use of avatars in training and simulation has expanded from entertainment into more serious roles. This change has emerged from the need to develop cost-effective and customizable avatars for interaction with trainees. While the use of avatars continues to expand, the impact of individual trainee factors on training outcomes, and how avatar design choices may interact with these factors, is not fully understood. In addition, the uncanny valley problem has yet to be resolved, which may impair users' perception and acceptance of avatars and the associated training scenarios. Gender has emerged as an important consideration when designing avatars, both in terms of gender differences in trainee perceptions and in terms of the impact of an avatar's gender on these perceptions and experiences. The participants' startle response is measured to determine their affective response to how pleasant the avatar is perceived to be, with the aim of ensuring positive training outcomes.
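Startle-probe studies of this kind typically quantify the blink response from orbicularis oculi EMG as a baseline-corrected peak in a short post-probe window. The sketch below illustrates that general pipeline only; the filter settings, window bounds, and the function name `startle_magnitude` are illustrative assumptions, not details taken from this paper.

```python
# Illustrative sketch (not the paper's pipeline): quantify startle-blink
# magnitude from orbicularis oculi EMG around a startle-probe onset.
import numpy as np
from scipy.signal import butter, filtfilt

def startle_magnitude(emg, fs, probe_idx, win_ms=(21, 120)):
    """Peak of the rectified, smoothed EMG in a post-probe window,
    baseline-corrected; larger peaks suggest a more negative response."""
    # Band-pass to isolate EMG activity (28-500 Hz is a common choice).
    b, a = butter(4, [28, min(499, fs / 2 - 1)], btype="bandpass", fs=fs)
    rectified = np.abs(filtfilt(b, a, emg))
    # Low-pass the rectified signal to obtain a smooth envelope.
    b2, a2 = butter(4, 40, btype="low", fs=fs)
    envelope = filtfilt(b2, a2, rectified)
    lo = probe_idx + int(win_ms[0] * fs / 1000)
    hi = probe_idx + int(win_ms[1] * fs / 1000)
    baseline = envelope[probe_idx - int(0.05 * fs):probe_idx].mean()
    return envelope[lo:hi].max() - baseline
```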
{"title":"Avatar and participant gender differences in the perception of uncanniness of virtual humans","authors":"Jacqueline D. Bailey","doi":"10.1109/ACII.2017.8273657","DOIUrl":"https://doi.org/10.1109/ACII.2017.8273657","url":null,"abstract":"The widespread use of avatars in training & simulation has expanded from entertainers to filling more serious roles. This change has emerged from the need to develop cost-effective & customizable avatars for interaction with trainees. While the use of avatars continues to expand, issues surrounding the impact of individual trainee factors on training outcomes, & how the design implications for avatars presented may interact with these factors, is not fully understood. Also, the uncanny valley has yet to be resolved, which may impair users' perception & acceptance of avatars & associated training scenarios. Gender has emerged as an important consideration when designing avatars, both in terms of gender differences in trainee perceptions, & the impact of avatars gender on these perceptions & experiences. The startle response of participants is measured to determine the participants' affective response to how pleasant the avatar is perceived, to ensure positive training outcomes.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"88 1","pages":"571-575"},"PeriodicalIF":0.0,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86969277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
Neural conditional ordinal random fields for agreement level estimation
Nemanja Rakicevic, Ognjen Rudovic, Stavros Petridis, M. Pantic
We present a novel approach to automated estimation of agreement intensity levels from facial images. To this end, we employ the MAHNOB Mimicry database of subjects recorded during dyadic interactions, where the facial images are annotated in terms of agreement intensity levels using the Likert scale (strong disagreement, disagreement, neutral, agreement and strong agreement). Dynamic modelling of the agreement levels is accomplished by means of a Conditional Ordinal Random Field model. Specifically, we propose a novel Neural Conditional Ordinal Random Field model that performs non-linear feature extraction from face images using the notion of Neural Networks, while also modelling temporal and ordinal relationships between the agreement levels. We show in our experiments that the proposed approach outperforms existing methods for modelling of sequential data. The preliminary results obtained on five subjects demonstrate that the intensity of agreement can successfully be estimated from facial images (39% F1 score) using the proposed method.
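The ordinal part of the model can be pictured as a cumulative-link ("proportional odds") output layer on top of learned features: one projection plus ordered cut-points between the five Likert levels. The sketch below shows only that static ordinal step, with assumed names (`ordinal_probs`) and random stand-in features; the paper's full model additionally captures temporal dependencies with a CRF.

```python
# Minimal cumulative-link ordinal output on top of a feature extractor.
import numpy as np

def ordinal_probs(features, w, cutpoints):
    """P(y = k) for K ordered levels from cumulative logits."""
    score = features @ w                                 # projection of neural features
    cum = 1.0 / (1.0 + np.exp(-(cutpoints - score)))     # P(y <= k), k = 1..K-1
    cum = np.concatenate([cum, [1.0]])                   # P(y <= K) = 1
    return np.diff(cum, prepend=0.0)                     # P(y = k) = P(y<=k) - P(y<=k-1)

# Example: 5 Likert levels (strong disagreement ... strong agreement).
rng = np.random.default_rng(0)
feats = rng.normal(size=16)                              # stand-in for NN features
w = rng.normal(size=16)
cutpoints = np.array([-2.0, -0.7, 0.7, 2.0])             # ordered b_1 < ... < b_4
print(ordinal_probs(feats, w, cutpoints))                # sums to 1 over 5 levels
```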
{"title":"Neural conditional ordinal random fields for agreement level estimation","authors":"Nemanja Rakicevic, Ognjen Rudovic, Stavros Petridis, M. Pantic","doi":"10.1109/ICPR.2016.7899967","DOIUrl":"https://doi.org/10.1109/ICPR.2016.7899967","url":null,"abstract":"We present a novel approach to automated estimation of agreement intensity levels from facial images. To this end, we employ the MAHNOB Mimicry database of subjects recorded during dyadic interactions, where the facial images are annotated in terms of agreement intensity levels using the Likert scale (strong disagreement, disagreement, neutral, agreement and strong agreement). Dynamic modelling of the agreement levels is accomplished by means of a Conditional Ordinal Random Field model. Specifically, we propose a novel Neural Conditional Ordinal Random Field model that performs non-linear feature extraction from face images using the notion of Neural Networks, while also modelling temporal and ordinal relationships between the agreement levels. We show in our experiments that the proposed approach outperforms existing methods for modelling of sequential data. The preliminary results obtained on five subjects demonstrate that the intensity of agreement can successfully be estimated from facial images (39% F1 score) using the proposed method.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"24 1","pages":"885-890"},"PeriodicalIF":0.0,"publicationDate":"2016-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82820464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
A data-driven validation of frontal EEG asymmetry using a consumer device
D. Friedman, Shai Shapira, L. Jacobson, M. Gruberger
Affective computing requires a reliable method to obtain real-time information about affective state, and one of the promising avenues is electroencephalography (EEG). We performed a study to test whether a low-cost, consumer-targeted EEG device can be used to measure extreme emotional valence. One of the most studied frameworks for how affect is reflected in EEG is based on frontal hemispheric asymmetry. Our results indicate that a simple replication of the methods derived from this hypothesis might not be sufficient. However, using a data-driven approach based on feature engineering and machine learning, we describe a method that can reliably measure valence with the EPOC device. We discuss our study in the context of the theoretical and empirical background for frontal asymmetry.
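For reference, the classical frontal-asymmetry index the study starts from is the log alpha power at a right frontal site minus that at a homologous left site (commonly F4 vs. F3). A minimal sketch, with conventional band edges assumed rather than the paper's exact EPOC configuration:

```python
# Classical frontal alpha asymmetry: ln(right alpha) - ln(left alpha).
# Positive values suggest relatively greater left-hemisphere activity,
# since alpha power is inversely related to cortical activity.
import numpy as np
from scipy.signal import welch

def frontal_alpha_asymmetry(left, right, fs, band=(8.0, 13.0)):
    def alpha_power(sig):
        f, pxx = welch(sig, fs=fs, nperseg=min(len(sig), 2 * int(fs)))
        mask = (f >= band[0]) & (f <= band[1])
        return pxx[mask].sum() * (f[1] - f[0])   # rectangle-rule band power
    return np.log(alpha_power(right)) - np.log(alpha_power(left))
```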
{"title":"A data-driven validation of frontal EEG asymmetry using a consumer device","authors":"D. Friedman, Shai Shapira, L. Jacobson, M. Gruberger","doi":"10.1109/ACII.2015.7344686","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344686","url":null,"abstract":"Affective computing requires a reliable method to obtain real time information regarding affective state, and one of the promising avenues is via electroencephalography (EEG). We have performed a study intended to test whether a low cost EEG device targeted at consumers can be used to measure extreme emotional valence. One of the most studied frameworks related to the way affect is reflected in EEG is based on frontal hemispheric asymmetry. Our results indicate that a simple replication of the methods derived from this hypothesis might not be sufficient. However, using a data-driven approach based on feature engineering and machine learning, we describe a method that can reliably measure valence with the EPOC device. We discuss our study in the context of the theoretical and empirical background for frontal asymmetry.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"7 1","pages":"930-937"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75186133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
Dynamic time warping: A single dry electrode EEG study in a self-paced learning task
T. Yamauchi, Kunchen Xiao, Casady Bowman, A. Mueen
This study investigates dynamic time warping (DTW) as a possible analysis method for EEG-based affective computing in a self-paced learning task, in which inter- and intra-personal differences are large. In one experiment, participants (N=200) carried out an implicit category learning task while their frontal EEG signals were collected. Using DTW, we measured the dissimilarity distances between participants' EEG signals and examined the extent to which a k-Nearest Neighbors algorithm could predict a participant's self-rated feelings from signals taken from other participants (between-participants prediction). Results showed that DTW provides potentially useful characteristics for EEG data analysis in a heterogeneous setting. In particular, theory-based segmentation of the time-series data was particularly useful for DTW analysis, while smoothing and standardization were detrimental when applied in a self-paced learning task.
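DTW aligns two series by warping the time axis; the dissimilarity is the minimum cumulative cost over monotone alignments. A minimal sketch of the standard recurrence (quadratic time, no warping-window constraint) of the kind that could populate the between-participant distance matrix for k-NN:

```python
# Plain O(n*m) dynamic time warping distance between two 1-D series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```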
{"title":"Dynamic time warping: A single dry electrode EEG study in a self-paced learning task","authors":"T. Yamauchi, Kunchen Xiao, Casady Bowman, A. Mueen","doi":"10.1109/ACII.2015.7344551","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344551","url":null,"abstract":"This study investigates dynamic time warping (DTW) as a possible analysis method for EEG-based affective computing in a self-paced learning task in which inter- and intra-personal differences are large. In one experiment, participants (N=200) carried out an implicit category learning task where their frontal EEG signals were collected throughout the experiment. Using DTW, we measured the dissimilarity distances of EEG signals between participants and examined the extent to which a k-Nearest Neighbors algorithm could predict self-rated feelings of a participant from signals taken from other participants (between-participants prediction). Results showed that DTW provides potentially useful characteristics for EEG data analysis in a heterogeneous setting. In particular, theory-based segmentation of time-series data were particularly useful for DTW analysis while smoothing and standardization were detrimental when applied in a self-paced learning task.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"20 1","pages":"56-62"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72966050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 18
MoodTracker: Monitoring collective emotions in the workplace
Yuliya Lutchyn, Paul Johns, A. Roseway, M. Czerwinski
Accurate and timely assessment of collective emotions in the workplace is a critical managerial task. However, perceptual, normative, and methodological challenges make it very difficult even for the most experienced organizational leaders. In this paper we present a MoodTracker - a technological solution that can help to overcome these challenges, and facilitate a continuous monitoring of the collective emotions in large groups in real-time. The MoodTracker is a program that runs on any PC device, and provides users with an interface for self-report of their affect. The device was tested in situ for four weeks, during which we received over 3000 emotion self-reports. Based on the usage data, we concluded that users had a positive attitude toward the MoodTracker and favorably evaluated its utility. From the collected data we were also able to establish some patterns of weekly and daily variations of employees' emotions in the workplace. We discuss practical applications and suggest directions for future development.
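The reported weekly and daily patterns come from aggregating timestamped self-reports. A minimal sketch of that kind of aggregation; the column names (`timestamp`, `valence`) and sample values are hypothetical, not the study's data or schema:

```python
# Group mood self-reports by weekday and hour to expose temporal patterns.
import pandas as pd

reports = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-03-02 09:15", "2015-03-02 14:40",
                                 "2015-03-06 16:05"]),
    "valence": [0.3, -0.1, 0.5],   # self-reported affect on some scale
})
reports["weekday"] = reports["timestamp"].dt.day_name()
reports["hour"] = reports["timestamp"].dt.hour
weekly = reports.groupby("weekday")["valence"].mean()
hourly = reports.groupby("hour")["valence"].mean()
print(weekly, hourly, sep="\n")
```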
{"title":"MoodTracker: Monitoring collective emotions in the workplace","authors":"Yuliya Lutchyn, Paul Johns, A. Roseway, M. Czerwinski","doi":"10.1109/ACII.2015.7344586","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344586","url":null,"abstract":"Accurate and timely assessment of collective emotions in the workplace is a critical managerial task. However, perceptual, normative, and methodological challenges make it very difficult even for the most experienced organizational leaders. In this paper we present a MoodTracker - a technological solution that can help to overcome these challenges, and facilitate a continuous monitoring of the collective emotions in large groups in real-time. The MoodTracker is a program that runs on any PC device, and provides users with an interface for self-report of their affect. The device was tested in situ for four weeks, during which we received over 3000 emotion self-reports. Based on the usage data, we concluded that users had a positive attitude toward the MoodTracker and favorably evaluated its utility. From the collected data we were also able to establish some patterns of weekly and daily variations of employees' emotions in the workplace. We discuss practical applications and suggest directions for future development.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"176 1","pages":"295-301"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73204967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
Synestouch: Haptic + audio affective design for wearable devices
P. Paredes, Ryuka Ko, Arezu Aghaseyedjavadi, J. Chuang, J. Canny, Linda Babler
Little is known about the affective expressivity of multisensory stimuli in wearable devices. While the theory of emotion has drawn on single-stimulus and multisensory experiments, it does not go further to explain the potential effects of sensory stimuli used in combination. In this paper, we present an analysis of combinations of two sensory modalities: haptic (more specifically, vibrotactile) stimuli and auditory stimuli. We present the design of a wrist-worn wearable prototype and empirical data from a controlled experiment (N=40), and analyze emotional responses from a dimensional (arousal + valence) perspective. Differences emerge between "matching" the emotions expressed through each modality and "mixing" auditory and haptic stimuli that each express a different emotion. We compare the effects of each condition to determine, for example, whether matching two negative stimuli produces a stronger negative effect than mixing two mismatched emotions. The main research question we study is: when haptic and auditory stimuli are combined, is there an interaction effect between the emotional type and the modality of the stimuli? We present quantitative and qualitative data to support our hypotheses, and complement them with a usability study investigating potential uses of the different modes. We conclude by discussing the implications for the design of affective interactions for wearable devices.
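The stated research question, an interaction between emotion type and stimulus-combination condition, maps naturally onto a two-way ANOVA. A hedged sketch with hypothetical column names and placeholder data; it shows the shape of the test only, not the study's actual analysis:

```python
# Two-way ANOVA: does the effect of emotion type depend on match vs. mix?
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "condition": np.repeat(["match", "mix"], 40),      # haptic+audio pairing
    "emotion": np.tile(["negative", "positive"], 40),  # expressed emotion
    "valence": rng.normal(size=80),                    # rated valence (placeholder)
})
model = ols("valence ~ C(condition) * C(emotion)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the interaction row tests the effect
```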
{"title":"Synestouch: Haptic + audio affective design for wearable devices","authors":"P. Paredes, Ryuka Ko, Arezu Aghaseyedjavadi, J. Chuang, J. Canny, Linda Babler","doi":"10.1109/ACII.2015.7344630","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344630","url":null,"abstract":"Little is known about the affective expressivity of multisensory stimuli in wearable devices. While the theory of emotion has referenced single stimulus and multisensory experiments, it does not go further to explain the potential effects of sensorial stimuli when utilized in combination. In this paper, we present an analysis of the combinations of two sensory modalities - haptic (more specifically, vibrotactile) stimuli and auditory stimuli. We present the design of a wrist-worn wearable prototype and empirical data from a controlled experiment (N=40) and analyze emotional responses from a dimensional (arousal + valence) perspective. Differences are exposed between “matching” the emotions expressed through each modality, versus \"mixing\" auditory and haptic stimuli each expressing different emotions. We compare the effects of each condition to determine, for example, if the matching of two negative stimuli emotions will render a higher negative effect than the mixing of two mismatching emotions. The main research question that we study is: When haptic and auditory stimuli are combined, is there an interaction effect between the emotional type and the modality of the stimuli? We present quantitative and qualitative data to support our hypotheses, and complement it with a usability study to investigate the potential uses of the different modes. We conclude by discussing the implications for the design of affective interactions for wearable devices.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"92 28 1","pages":"595-601"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77718024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
On rater reliability and agreement based dynamic active learning
Yue Zhang, E. Coutinho, Björn Schuller, Zixing Zhang, M. Adam
In this paper, we propose two novel Dynamic Active Learning (DAL) methods with the aim of ultimately reducing the costly human labelling work required for subjective tasks such as speech emotion recognition. Compared to conventional Active Learning (AL) algorithms, the proposed DAL approaches employ a highly efficient adaptive query strategy that minimises the number of annotations through three advancements. First, we shift from the standard majority-voting procedure, in which unlabelled instances are annotated by a fixed number of raters, to an agreement-based annotation technique that dynamically determines how many human annotators are required to label a selected instance. Second, we introduce the concept of the order-based DAL algorithm by considering rater reliability and inter-rater agreement. Third, a highly dynamic development trend is implemented by raising the required agreement level depending on the prediction uncertainty. In extensive experiments on standardised test-beds, we show that the new dynamic methods significantly improve the efficiency of existing AL algorithms, reducing human labelling effort by up to 85.41% while achieving the same classification accuracy. The enhanced DAL derivations thus open up high-potential research directions for the utmost exploitation of unlabelled data.
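The agreement-based annotation step can be sketched as a loop that queries raters (pre-sorted by estimated reliability, per the order-based variant) one at a time and stops once enough votes coincide. The function name and the fallback rule below are illustrative assumptions:

```python
# Query raters until `agreement_level` votes coincide on one label.
def annotate_dynamically(instance, raters, agreement_level=2, max_raters=5):
    votes = {}
    for rater in raters[:max_raters]:           # raters pre-sorted by reliability
        label = rater(instance)                 # each rater is a callable annotator
        votes[label] = votes.get(label, 0) + 1
        if votes[label] >= agreement_level:     # early stop: enough agreement
            return label, sum(votes.values())   # label and annotations spent
    # No sufficient agreement: fall back to the plurality label.
    return max(votes, key=votes.get), sum(votes.values())
```

In the paper's scheme, the required agreement level is raised for instances whose model predictions are more uncertain, so harder instances receive more annotations.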
{"title":"On rater reliability and agreement based dynamic active learning","authors":"Yue Zhang, E. Coutinho, Björn Schuller, Zixing Zhang, M. Adam","doi":"10.1109/ACII.2015.7344553","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344553","url":null,"abstract":"In this paper, we propose two novel Dynamic Active Learning (DAL) methods with the aim of ultimately reducing the costly human labelling work for subjective tasks such as speech emotion recognition. Compared to conventional Active Learning (AL) algorithms, the proposed DAL approaches employ a highly efficient adaptive query strategy that minimises the number of annotations through three advancements. First, we shift from the standard majority voting procedure, in which unlabelled instances are annotated by a fixed number of raters, to an agreement-based annotation technique that dynamically determines how many human annotators are required to label a selected instance. Second, we introduce the concept of the order-based DAL algorithm by considering rater reliability and inter-rater agreement. Third, a highly dynamic development trend is successfully implemented by upgrading the agreement levels depending on the prediction uncertainty. In extensive experiments on standardised test-beds, we show that the new dynamic methods significantly improve the efficiency of the existing AL algorithms by reducing human labelling effort up to 85.41%, while achieving the same classification accuracy. Thus, the enhanced DAL derivations opens up high-potential research directions for the utmost exploitation of unlabelled data.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"13 1","pages":"70-76"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81551658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 17
EmoShapelets: Capturing local dynamics of audio-visual affective speech
Y. Shangguan, E. Provost
Automatic recognition of emotion in speech is an active area of research. One of the important open challenges relates to how the emotional characteristics of speech change over time. Past research has demonstrated the importance of capturing global dynamics (across an entire utterance) and local dynamics (within segments of an utterance). In this paper, we propose a novel concept, EmoShapelets, to capture the local dynamics in speech. EmoShapelets capture changes in emotion that occur within utterances. We propose a framework to generate, update, and select EmoShapelets. We also demonstrate the discriminative power of EmoShapelets by using them with various classifiers to achieve results comparable to state-of-the-art systems on the IEMOCAP dataset. EmoShapelets can serve as basic units of emotion expression and provide additional evidence for the existence of local patterns of emotion underlying human communication.
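A shapelet is a short, discriminative subsequence; its distance to a longer series is the minimum distance over all equal-length windows. A minimal sketch of that core primitive (inputs assumed to be 1-D NumPy arrays); the paper's generation, update, and selection machinery builds on top of it:

```python
# Distance from a feature series to a shapelet: best match over all windows.
import numpy as np

def shapelet_distance(series, shapelet):
    L = len(shapelet)
    best = np.inf
    for start in range(len(series) - L + 1):
        window = series[start:start + L]
        d = np.linalg.norm(window - shapelet) / L   # length-normalised distance
        best = min(best, d)
    return best
```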
{"title":"EmoShapelets: Capturing local dynamics of audio-visual affective speech","authors":"Y. Shangguan, E. Provost","doi":"10.1109/ACII.2015.7344576","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344576","url":null,"abstract":"Automatic recognition of emotion in speech is an active area of research. One of the important open challenges relates to how the emotional characteristics of speech change in time. Past research has demonstrated the importance of capturing global dynamics (across an entire utterance) and local dynamics (within segments of an utterance). In this paper, we propose a novel concept, EmoShapelets, to capture the local dynamics in speech. EmoShapelets capture changes in emotion that occur within utterances. We propose a framework to generate, update, and select EmoShapelets. We also demonstrate the discriminative power of EmoShapelets by using them with various classifiers to achieve comparable results with the state-of-the-art systems on the IEMOCAP dataset. EmoShapelets can serve as basic units of emotion expression and provide additional evidence supporting the existence of local patterns of emotion underlying human communication.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"32 1","pages":"229-235"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82806884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Multimodal approach for automatic recognition of machiavellianism
Zahra Nazari, Gale M. Lucas, J. Gratch
Machiavellianism, by definition, is the tendency to use other people as tools to achieve one's own goals. Despite the large focus on the Big Five traits of personality, this anti-social trait is relatively unexplored in the computational realm. Automatically recognizing anti-social traits can have important uses across a variety of applications. In this paper, we use negotiation as a setting that provides Machiavellians with the opportunity to reveal their exploitative inclinations. We use textual, visual, acoustic, and behavioral cues to automatically predict High vs. Low Machiavellian personalities. These learned models have good accuracy when compared with other personality-recognition methods, and we provide evidence that the automatically-learned models are consistent with existing literature on this anti-social trait, suggesting that these results can generalize to other domains.
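One common way to combine such cues is early (feature-level) fusion: concatenate per-modality feature vectors and train a single classifier. The sketch below uses random placeholder features and a logistic-regression classifier as assumptions; the paper does not necessarily use this exact fusion scheme or classifier.

```python
# Early fusion of per-modality features for a High-vs-Low binary classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fuse(text_f, visual_f, acoustic_f, behavior_f):
    # Concatenate per-modality feature matrices column-wise.
    return np.concatenate([text_f, visual_f, acoustic_f, behavior_f], axis=1)

rng = np.random.default_rng(2)
X = fuse(rng.normal(size=(60, 10)), rng.normal(size=(60, 8)),
         rng.normal(size=(60, 12)), rng.normal(size=(60, 5)))
y = rng.integers(0, 2, size=60)   # 1 = High Machiavellian (placeholder labels)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:3]))
```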
{"title":"Multimodal approach for automatic recognition of machiavellianism","authors":"Zahra Nazari, Gale M. Lucas, J. Gratch","doi":"10.1109/ACII.2015.7344574","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344574","url":null,"abstract":"Machiavellianism, by definition, is the tendency to use other people as a tool to achieve one's own goals. Despite the large focus on the Big Five traits of personality, this anti-social trait is relatively unexplored in the computational realm. Automatically recognizing anti-social traits can have important uses across a variety of applications. In this paper, we use negotiation as a setting that provides Machiavellians with the opportunity to reveal their exploitative inclinations. We use textual, visual, acoustic, and behavioral cues to automatically predict High vs. Low Machiavellian personalities. These learned models have good accuracy when compared with other personality-recognition methods, and we provide evidence that the automatically-learned models are consistent with existing literature on this anti-social trait, giving evidence that these results can generalize to other domains.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"56 23","pages":"215-221"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91420578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
Facial expression recognition with multithreaded cascade of rotation-invariant HOG
Jinhui Chen, T. Takiguchi, Y. Ariki
We propose a novel and general framework, the multithreading cascade of rotation-invariant histograms of oriented gradients (McRiHOG), for facial expression recognition (FER). In this paper, we attempt to solve two problems in FER: obtaining high-quality local feature descriptors and building a robust classification algorithm. Our first solution is to adopt annular-spatial-bin HOG (Histograms of Oriented Gradients) descriptors to describe local patches, which significantly improves the descriptors' rotation invariance and feature-description accuracy. The second is a novel multithreading cascade that learns multiclass data simultaneously; it is implemented through non-interfering boosting channels, each built to train weak classifiers for one expression. The superiority of McRiHOG over current state-of-the-art methods is clearly demonstrated by evaluation experiments on three popular public databases: CK+, MMI, and AFEW.
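The annular-spatial-bin idea can be sketched as follows: pool gradient magnitudes into concentric rings, and measure each pixel's gradient orientation relative to its radial direction from the patch centre, so that rotating the patch leaves the histogram (approximately) unchanged. The ring and orientation bin counts below are arbitrary illustrative choices, not the paper's parameters:

```python
# Rotation-invariant HOG sketch: annular spatial bins + radially relative angles.
import numpy as np

def annular_hog(patch, n_rings=3, n_orient=9):
    """patch: 2-D grayscale array; returns an L2-normalised descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    radial = np.arctan2(ys - cy, xs - cx)                # direction from centre
    rel = (np.arctan2(gy, gx) - radial) % (2 * np.pi)    # rotation-invariant angle
    r = np.hypot(ys - cy, xs - cx)
    ring = np.minimum((r / (r.max() + 1e-9) * n_rings).astype(int), n_rings - 1)
    ob = np.minimum((rel / (2 * np.pi) * n_orient).astype(int), n_orient - 1)
    hist = np.zeros((n_rings, n_orient))
    np.add.at(hist, (ring, ob), mag)                     # magnitude-weighted votes
    return (hist / (np.linalg.norm(hist) + 1e-9)).ravel()
```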
{"title":"Facial expression recognition with multithreaded cascade of rotation-invariant HOG","authors":"Jinhui Chen, T. Takiguchi, Y. Ariki","doi":"10.1109/ACII.2015.7344636","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344636","url":null,"abstract":"We propose a novel and general framework, named the multithreading cascade of rotation-invariant histograms of oriented gradients (McRiHOG) for facial expression recognition (FER). In this paper, we attempt to solve two problems about high-quality local feature descriptors and robust classifying algorithm for FER. The first solution is that we adopt annular spatial bins type HOG (Histograms of Oriented Gradients) descriptors to describe local patches. In this way, it significantly enhances the descriptors in regard to rotation-invariant ability and feature description accuracy; The second one is that we use a novel multithreading cascade to simultaneously learn multiclass data. Multithreading cascade is implemented through non-interfering boosting channels, which are respectively built to train weak classifiers for each expression. The superiority of McRiHOG over current state-of-the-art methods is clearly demonstrated by evaluation experiments based on three popular public databases, CK+, MMI, and AFEW.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"20 1","pages":"636-642"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88182701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8