
2015 International Conference on Affective Computing and Intelligent Interaction (ACII) — Latest Publications

Modelling the influence of personality and culture on affect and enjoyment in multimedia
Sharath Chandra Guntuku, Weisi Lin, M. A. Scott, G. Ghinea
Affect is evoked through an intricate relationship between the characteristics of stimuli, individuals, and systems of perception. While affect is widely researched, few studies consider the combination of multimedia system characteristics and human factors together. As such, this paper explores the influence of personality (Five-Factor Model) and cultural traits (Hofstede Model) on the intensity of multimedia-evoked positive and negative affects (emotions). A set of 144 video sequences (from 12 short movie clips) were evaluated by 114 participants from a cross-cultural population, producing 1232 ratings. On this data, three multilevel regression models are compared: a baseline model that only considers system factors; an extended model that includes personality and culture; and an optimistic model in which each participant is modelled. An analysis shows that personal and cultural traits represent 5.6% of the variance in positive affect and 13.6% of the variance in negative affect. In addition, the affect-enjoyment correlation varied across the clips. This suggests that personality and culture play a key role in predicting the intensity of negative affect and whether or not it is enjoyed, but a more sophisticated set of predictors is needed to model positive affect with the same efficacy.
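To make the baseline-versus-extended comparison concrete, here is a minimal sketch of a multilevel (mixed-effects) regression in Python with statsmodels. All column names (participant, resolution, neuroticism, power_distance, and so on) and the synthetic data are illustrative assumptions, not the study's actual variables.

```python
# Minimal sketch: baseline vs. extended multilevel regression with a
# random intercept per participant. Data and columns are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "participant": rng.integers(0, 30, n),   # grouping for random intercept
    "resolution": rng.integers(0, 2, n),     # system factor (assumed)
    "framerate": rng.integers(0, 2, n),      # system factor (assumed)
    "neuroticism": rng.normal(size=n),       # Five-Factor trait (assumed)
    "power_distance": rng.normal(size=n),    # Hofstede dimension (assumed)
})
df["negative_affect"] = (0.5 * df.neuroticism + 0.3 * df.power_distance
                         + rng.normal(size=n))

baseline = smf.mixedlm("negative_affect ~ resolution + framerate",
                       df, groups=df["participant"]).fit()
extended = smf.mixedlm("negative_affect ~ resolution + framerate"
                       " + neuroticism + power_distance",
                       df, groups=df["participant"]).fit()

# Residual-variance reduction from adding the traits, in the spirit of
# the reported 5.6% (positive) and 13.6% (negative affect) figures.
print(f"extra variance explained: {1 - extended.scale / baseline.scale:.1%}")
```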
{"title":"Modelling the influence of personality and culture on affect and enjoyment in multimedia","authors":"Sharath Chandra Guntuku, Weisi Lin, M. A. Scott, G. Ghinea","doi":"10.1109/ACII.2015.7344577","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344577","url":null,"abstract":"Affect is evoked through an intricate relationship between the characteristics of stimuli, individuals, and systems of perception. While affect is widely researched, few studies consider the combination of multimedia system characteristics and human factors together. As such, this paper explores tpersonality (Five-Factor Model) and cultural traits (Hofstede Model) on the intensity of multimedia-evoked positive and negative affects (emotions). A set of 144 video sequences (from 12 short movie clips) were evaluated by 114 participants from a cross-cultural population, producing 1232 ratings. On this data, threehe influence of personality (Five-Factor Model) and cultural traits (Hofstede Model) on the intensity of multimedia-evoked positive and negative affects (emotions). A set of 144 video sequences (from 12 short movie clips) were evaluated by 114 participants from a cross-cultural population, producing 1232 ratings. On this data, three multilevel regression models are compared: a baseline model that only considers system factors; an extended model that includes personality and culture; and an optimistic model in which each participant is modelled. An analysis shows that personal and cultural traits represent 5.6% of the variance in positive affect and 13.6% of the variance in negative affect. In addition, the affect-enjoyment correlation varied across the clips. This suggests that personality and culture play a key role in predicting the intensity of negative affect and whether or not it is enjoyed, but a more sophisticated set of predictors is needed to model positive affect with the same efficacy.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"16 1","pages":"236-242"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85818646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
Saying YES! The cross-cultural complexities of favors and trust in human-agent negotiation
Johnathan Mell, Gale M. Lucas, J. Gratch, A. Rosenfeld
Negotiation between virtual agents and humans is a complex field that requires designers of systems to be aware not only of the efficient solutions to a given game, but also the mechanisms by which humans create value over multiple negotiations. One way of considering the agent's impact beyond a single negotiation session is by considering the use of external “ledgers” across multiple sessions. We present results that describe the effects of favor exchange on negotiation outcomes, fairness, and trust for two distinct cross-cultural populations, and illustrate the ramifications of their similarities and differences on virtual agent design.
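The external "ledger" carried across sessions can be pictured as a small persistent data structure. The sketch below is one reading of the idea, not the authors' implementation; all names are hypothetical.

```python
# Illustrative sketch of a favor "ledger" persisting across negotiation
# sessions (our reading of the concept, not the authors' code).
from collections import defaultdict

class FavorLedger:
    def __init__(self):
        self.owed = defaultdict(int)  # favors each partner owes the agent

    def grant_favor(self, partner: str) -> None:
        """Agent concedes value now, expecting reciprocity later."""
        self.owed[partner] += 1

    def request_repayment(self, partner: str) -> bool:
        """Call in a favor if one is outstanding."""
        if self.owed[partner] > 0:
            self.owed[partner] -= 1
            return True
        return False

ledger = FavorLedger()
ledger.grant_favor("human_1")               # session 1: agent gives ground
print(ledger.request_repayment("human_1"))  # session 2: True, favor repaid
```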
{"title":"Saying YES! The cross-cultural complexities of favors and trust in human-agent negotiation","authors":"Johnathan Mell, Gale M. Lucas, J. Gratch, A. Rosenfeld","doi":"10.1109/ACII.2015.7344571","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344571","url":null,"abstract":"Negotiation between virtual agents and humans is a complex field that requires designers of systems to be aware not only of the efficient solutions to a given game, but also the mechanisms by which humans create value over multiple negotiations. One way of considering the agent's impact beyond a single negotiation session is by considering the use of external “ledgers” across multiple sessions. We present results that describe the effects of favor exchange on negotiation outcomes, fairness, and trust for two distinct cross-cultural populations, and illustrate the ramifications of their similarities and differences on virtual agent design.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"1 1","pages":"194-200"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79315145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Multimodal depression recognition with dynamic visual and audio cues
Lang He, D. Jiang, H. Sahli
In this paper, we present our system design for audio-visual multi-modal depression recognition. To improve the estimation accuracy of the Beck Depression Inventory (BDI) score, besides the Low Level Descriptors (LLD) features and the Local Gabor Binary Pattern-Three Orthogonal Planes (LGBP-TOP) features provided by the 2014 Audio/Visual Emotion Challenge and Workshop (AVEC2014), we extract extra features to capture key behavioural changes associated with depression. From audio we extract the speaking rate, and from video, the head pose features, the Space-Temporal Interesting Point (STIP) features, and local kinematic features via the Divergence-Curl-Shear descriptors. These features describe body movements and spatio-temporal changes within the image sequence. We also consider global dynamic features, obtained using the motion history histogram (MHH), bag of words (BOW) features, and the vector of locally aggregated descriptors (VLAD). To capture the complementary information within the used features, we evaluate two fusion systems: a feature fusion scheme, and a model fusion scheme via local linear regression (LLR). Experiments are carried out on the training and development sets of the Depression Recognition Sub-Challenge (DSC) of AVEC2014; we obtain a root mean square error (RMSE) of 7.6697 and a mean absolute error (MAE) of 6.1683 on the development set, which are better than or comparable with the state-of-the-art results of the AVEC2014 challenge.
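As a concrete reference for the evaluation, here is a minimal sketch of feature-level fusion and the two reported error metrics; the arrays are placeholders, not the actual AVEC2014 descriptors.

```python
# Sketch of feature-level fusion and the reported RMSE/MAE metrics.
# Feature arrays and BDI scores are synthetic placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
audio_feats = rng.normal(size=(50, 20))    # stand-in for LLDs + speaking rate
video_feats = rng.normal(size=(50, 30))    # stand-in for LGBP-TOP, STIP, MHH
bdi_scores = rng.uniform(0, 45, size=50)   # BDI score range

# Feature fusion: concatenate modality vectors before regression.
fused = np.hstack([audio_feats, video_feats])
model = SVR().fit(fused[:40], bdi_scores[:40])
pred = model.predict(fused[40:])

rmse = mean_squared_error(bdi_scores[40:], pred) ** 0.5
mae = mean_absolute_error(bdi_scores[40:], pred)
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}")
```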
{"title":"Multimodal depression recognition with dynamic visual and audio cues","authors":"Lang He, D. Jiang, H. Sahli","doi":"10.1109/ACII.2015.7344581","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344581","url":null,"abstract":"In this paper, we present our system design for audio visual multi-modal depression recognition. To improve the estimation accuracy of the Beck Depression Inventory (BDI) score, besides the Low Level Descriptors (LLD) features and the Local Gabor Binary Pattern-Three Orthogonal Planes (LGBP-TOP) features provided by the 2014 Audio/Visual Emotion Challenge and Workshop (AVEC2014), we extract extra features to capture key behavioural changes associated with depression. From audio we extract the speaking rate, and from video, the head pose features, the Space-Temporal Interesting Point (STIP) features, and local kinematic features via the Divergence-Curl-Shear descriptors. These features describe body movements, and spatio-temporal changes within the image sequence. We also consider global dynamic features, obtained using motion history histogram (MHH), bag of words (BOW) features and vector of local aggregated descriptors (VLAD). To capture the complementary information within the used features, we evaluate two fusion systems - the feature fusion scheme, and the model fusion scheme via local linear regression (LLR). Experiments are carried out on the training set and development set of the Depression Recognition Sub-Challenge (DSC) of AVEC2014, we obtain root mean square error (RMSE) of 7.6697, and mean absolute error (MAE) of 6.1683 on the development set, which are better or comparable with the state of the art results of the AVEC2014 challenge.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"17 1","pages":"260-266"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81031062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 38
A new approach for pain event detection in video
Junkai Chen, Z. Chi, Hong Fu
A new approach for pain event detection in video is presented in this paper. Unlike some previous works, which focused on frame-based detection, we target detecting pain events at the video level. In this work, we explore the spatial information of video frames and the dynamic textures of video sequences, and propose two different types of features. HOG of fiducial points (P-HOG) is employed to extract spatial features from video frames, and HOG from Three Orthogonal Planes (HOG-TOP) is used to represent the dynamic textures of video subsequences. After that, we apply max pooling to represent a video sequence as a global feature vector. Multiple Kernel Learning (MKL) is utilized to find an optimal fusion of the two types of features, and an SVM with multiple kernels is trained to perform the final classification. We conduct our experiments on the UNBC-McMaster Shoulder Pain dataset and achieve promising results, showing the effectiveness of our approach.
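A minimal sketch of the video-level pipeline: max pooling of per-frame descriptors into one vector per video, then a precomputed combined kernel as a fixed-weight stand-in for MKL (a real MKL solver would learn the kernel weights). All data is synthetic.

```python
# Sketch: max-pool per-frame descriptors into a video-level vector,
# then combine two feature types with a summed RBF kernel (fixed
# weights here; MKL would learn them). Toy data only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(1)

def video_vector(n_frames: int, dim: int) -> np.ndarray:
    frames = rng.normal(size=(n_frames, dim))  # per-frame features
    return frames.max(axis=0)                  # max pooling over time

phog = np.stack([video_vector(30, 64) for _ in range(40)])    # spatial
hogtop = np.stack([video_vector(30, 48) for _ in range(40)])  # dynamic
labels = rng.integers(0, 2, size=40)                          # pain / no pain

K = 0.5 * rbf_kernel(phog) + 0.5 * rbf_kernel(hogtop)  # fused Gram matrix
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.score(K, labels))  # training accuracy on the toy data
```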
{"title":"A new approach for pain event detection in video","authors":"Junkai Chen, Z. Chi, Hong Fu","doi":"10.1109/ACII.2015.7344579","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344579","url":null,"abstract":"A new approach for pain event detection in video is presented in this paper. Different from some previous works which focused on frame-based detection, we target in detecting pain events at video level. In this work, we explore the spatial information of video frames and dynamic textures of video sequences, and propose two different types of features. HOG of fiducial points (P-HOG) is employed to extract spatial features from video frames and HOG from Three Orthogonal Planes (HOG-TOP) is used to represent dynamic textures of video subsequences. After that, we apply max pooling to represent a video sequence as a global feature vector. Multiple Kernel Learning (MKL) is utilized to find an optimal fusion of the two types of features. And an SVM with multiple kernels is trained to perform the final classification. We conduct our experiments on the UNBC-McMaster Shoulder Pain dataset and achieve promising results, showing the effectiveness of our approach.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"71 1","pages":"250-254"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83229565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Semi-supervised emotional classification of color images by learning from cloud
Na Li, Yong Xia, Yuwei Xia
Classification of images based on the feelings each image evokes in its viewers is becoming more and more popular. Due to the difficulty of gathering training data, this task is intrinsically a small-sample learning problem. Hence, the results produced by most existing solutions are less accurate. In this paper, we propose the semi-supervised hierarchical classification (SSHC) algorithm for emotional classification of color images. We extract three groups of features for each classification task and use those features in a two-level classification model that is based on the support vector machine (SVM) and Adaboost technique. To enlarge the training dataset, we employ each training image to retrieve similar images from the Internet cloud and jointly use the manually labeled small dataset and retrieved large but unlabeled dataset to train a classifier via semi-supervised learning. We have evaluated the proposed algorithm against the fuzzy similarity-based emotional classification (FSBEC) algorithm and another supervised hierarchical classification algorithm that does not learn from online images in three bi-class classification tasks, including “warm vs. cool”, “light vs. heavy” and “static vs. dynamic”. Our pilot results suggest that, by learning from the similar images archived in the Internet cloud, the proposed SSHC algorithm can produce more accurate emotional classification of color images.
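The cloud-learning step behaves like classic self-training: pseudo-label the retrieved, unlabeled images with a seed classifier, keep the confident ones, and retrain. A minimal sketch under that assumption, with synthetic stand-ins for the labeled set and the retrieved images:

```python
# Self-training sketch of "learning from cloud". Data, threshold, and
# the single-level SVC (vs. the paper's SVM+Adaboost hierarchy) are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_labeled = rng.normal(size=(60, 16))    # small hand-labeled set
y_labeled = rng.integers(0, 2, size=60)  # e.g. warm vs. cool
X_cloud = rng.normal(size=(500, 16))     # retrieved, unlabeled images

clf = SVC(probability=True).fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_cloud)
confident = proba.max(axis=1) > 0.9      # keep confident pseudo-labels only

X_aug = np.vstack([X_labeled, X_cloud[confident]])
y_aug = np.concatenate([y_labeled, proba[confident].argmax(axis=1)])
clf = SVC(probability=True).fit(X_aug, y_aug)  # retrain on the enlarged set
print(f"added {confident.sum()} pseudo-labeled images")
```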
{"title":"Semi-supervised emotional classification of color images by learning from cloud","authors":"Na Li, Yong Xia, Yuwei Xia","doi":"10.1109/ACII.2015.7344555","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344555","url":null,"abstract":"Classification of images based on the feelings generated by each image in its reviewers is becoming more and more popular. Due to the difficulty of gathering training data, this task is intrinsically a small-sample learning problem. Hence, the results produced by most existing solutions are less accurate. In this paper, we propose the semi-supervised hierarchical classification (SSHC) algorithm for emotional classification of color images. We extract three groups of features for each classification task and use those features in a two-level classification model that is based on the support vector machine (SVM) and Adaboost technique. To enlarge the training dataset, we employ each training image to retrieve similar images from the Internet cloud and jointly use the manually labeled small dataset and retrieved large but unlabeled dataset to train a classifier via semi-supervised learning. We have evaluated the proposed algorithm against the fuzzy similarity-based emotional classification (FSBEC) algorithm and another supervised hierarchical classification algorithm that does not learn from online images in three bi-class classification tasks, including “warm vs. cool”, “light vs. heavy” and “static vs. dynamic”. Our pilot results suggest that, by learning from the similar images archived in the Internet cloud, the proposed SSHC algorithm can produce more accurate emotional classification of color images.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"1 1","pages":"84-90"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83096511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Hierarchical modeling of temporal course in emotional expression for speech emotion recognition
Chung-Hsien Wu, Wei-Bin Liang, Kuan-Chun Cheng, Jen-Chun Lin
This paper presents an approach to hierarchical modeling of temporal course in emotional expression for speech emotion recognition. In the proposed approach, a segmentation algorithm is employed to hierarchically chunk an input utterance into three-level temporal units, including low-level descriptors (LLDs)-based sub-utterance level, emotion profile (EP)-based sub-utterance level and utterance level. An emotion-oriented hierarchical structure is constructed based on the three-level units to describe the temporal emotion expression in an utterance. A hierarchical correlation model is also proposed to fuse the three-level outputs from the corresponding emotion recognizers and further model the correlation among them to determine the emotional state of the utterance. The EMO-DB corpus was used to evaluate the performance on speech emotion recognition. Experimental results show that the proposed method considering the temporal course in emotional expression provides the potential to improve the speech emotion recognition performance.
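The three-level fusion can be approximated by stacking the per-level recognizer outputs and training a meta-classifier over them. The sketch below uses logistic regression as a simplified stand-in for the paper's hierarchical correlation model; all scores are synthetic.

```python
# Sketch: fusing emotion posteriors from three temporal levels
# (LLD-based sub-utterance, EP-based sub-utterance, utterance).
# Logistic regression stands in for the hierarchical correlation model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, k = 200, 4  # utterances, emotion classes (illustrative)

# Posterior scores from three hypothetical per-level recognizers.
lld_scores = rng.dirichlet(np.ones(k), size=n)
ep_scores = rng.dirichlet(np.ones(k), size=n)
utt_scores = rng.dirichlet(np.ones(k), size=n)
labels = rng.integers(0, k, size=n)

# Stack the three-level outputs; the meta-classifier models the
# correlation among them to decide the utterance's emotional state.
meta_X = np.hstack([lld_scores, ep_scores, utt_scores])
fusion = LogisticRegression(max_iter=1000).fit(meta_X, labels)
print(fusion.predict(meta_X[:5]))
```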
{"title":"Hierarchical modeling of temporal course in emotional expression for speech emotion recognition","authors":"Chung-Hsien Wu, Wei-Bin Liang, Kuan-Chun Cheng, Jen-Chun Lin","doi":"10.1109/ACII.2015.7344666","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344666","url":null,"abstract":"This paper presents an approach to hierarchical modeling of temporal course in emotional expression for speech emotion recognition. In the proposed approach, a segmentation algorithm is employed to hierarchically chunk an input utterance into three-level temporal units, including low-level descriptors (LLDs)-based sub-utterance level, emotion profile (EP)-based sub-utterance level and utterance level. An emotion-oriented hierarchical structure is constructed based on the three-level units to describe the temporal emotion expression in an utterance. A hierarchical correlation model is also proposed to fuse the three-level outputs from the corresponding emotion recognizers and further model the correlation among them to determine the emotional state of the utterance. The EMO-DB corpus was used to evaluate the performance on speech emotion recognition. Experimental results show that the proposed method considering the temporal course in emotional expression provides the potential to improve the speech emotion recognition performance.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"27 1","pages":"810-814"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88475715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
To rank or to classify? Annotating stress for reliable PTSD profiling
Christoffer Holmgård, Georgios N. Yannakakis, H. P. Martínez, Karen-Inge Karstoft
In this paper we profile the stress responses of patients diagnosed with post-traumatic stress disorder (PTSD) to individual events in the game-based PTSD stress inoculation and exposure virtual environment StartleMart. Thirteen veterans suffering from PTSD play the game while we record their skin conductance. Game logs are used to identify individual events, and continuous decomposition analysis is applied to the skin conductance signals to derive event-related stress responses. The extracted skin conductance features from this analysis are used to profile each individual player in terms of stress response. We observe a large degree of variation across the 13 veterans which further validates the idiosyncratic nature of PTSD physiological manifestations. Further to game data and skin conductance signals we ask PTSD patients to indicate the most stressful event experienced (class-based annotation) and also compare the stress level of all events in a pairwise preference manner (rank-based annotation). We compare the two annotation stress schemes by correlating the self-reports to individual event-based stress manifestations. The self-reports collected through class-based annotation exhibit no correlation to physiological responses, whereas, the pairwise preferences yield significant correlations to all skin conductance features extracted via continuous decomposition analysis. The core findings of the paper suggest that reporting of stress preferences across events yields more reliable data that capture aspects of the stress experienced and that features extracted from skin conductance via continuous decomposition analysis offer appropriate predictors of stress manifestation across PTSD patients.
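The contrast between the two annotation schemes comes down to what can be correlated with the physiology: pairwise preferences yield a graded stress value per event, while class-based annotation yields a single binary flag. A toy illustration with synthetic data and an arbitrary skin-conductance feature:

```python
# Toy contrast of rank-based vs. class-based stress annotation against
# one skin-conductance feature. All values are synthetic.
import numpy as np
from scipy.stats import kendalltau, pointbiserialr

rng = np.random.default_rng(4)
sc_amplitude = rng.normal(size=12)  # one SC feature per in-game event

# Rank-based annotation: a stress score per event derived from
# pairwise preferences (here simulated as noisy physiology).
rank_scores = sc_amplitude + rng.normal(scale=0.5, size=12)

# Class-based annotation: only the single most stressful event flagged.
most_stressful = np.zeros(12)
most_stressful[rank_scores.argmax()] = 1

print(kendalltau(rank_scores, sc_amplitude))         # graded: correlates
print(pointbiserialr(most_stressful, sc_amplitude))  # binary: little signal
```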
{"title":"To rank or to classify? Annotating stress for reliable PTSD profiling","authors":"Christoffer Holmgård, Georgios N. Yannakakis, H. P. Martínez, Karen-Inge Karstoft","doi":"10.1109/ACII.2015.7344648","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344648","url":null,"abstract":"In this paper we profile the stress responses of patients diagnosed with post-traumatic stress disorder (PTSD) to individual events in the game-based PTSD stress inoculation and exposure virtual environment StartleMart. Thirteen veterans suffering from PTSD play the game while we record their skin conductance. Game logs are used to identify individual events, and continuous decomposition analysis is applied to the skin conductance signals to derive event-related stress responses. The extracted skin conductance features from this analysis are used to profile each individual player in terms of stress response. We observe a large degree of variation across the 13 veterans which further validates the idiosyncratic nature of PTSD physiological manifestations. Further to game data and skin conductance signals we ask PTSD patients to indicate the most stressful event experienced (class-based annotation) and also compare the stress level of all events in a pairwise preference manner (rank-based annotation). We compare the two annotation stress schemes by correlating the self-reports to individual event-based stress manifestations. The self-reports collected through class-based annotation exhibit no correlation to physiological responses, whereas, the pairwise preferences yield significant correlations to all skin conductance features extracted via continuous decomposition analysis. The core findings of the paper suggest that reporting of stress preferences across events yields more reliable data that capture aspects of the stress experienced and that features extracted from skin conductance via continuous decomposition analysis offer appropriate predictors of stress manifestation across PTSD patients.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"21 1","pages":"719-725"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87920872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Automated recognition of complex categorical emotions from facial expressions and head motions
Andra Adams, P. Robinson
Classifying complex categorical emotions has been a relatively unexplored area of affective computing. We present a classifier trained to recognize 18 complex emotion categories. A leave-one-out training approach was used on 181 acted videos from the EU-Emotion Stimulus Set. Performance scores for the 18-choice classification problem were AROC = 0.84, 2AFC = 0.84, F1 = 0.33, Accuracy = 0.47. On a simplified 6-choice classification problem, the classifier had an accuracy of 0.64 compared with the validated human accuracy of 0.74. The classifier has been integrated into an expression training interface which gives meaningful feedback to humans on their portrayal of complex emotions through face and head movements. This work has applications as an intervention for Autism Spectrum Conditions.
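A minimal sketch of leave-one-out evaluation with two of the reported metrics, using a toy binary problem in place of the 18-class task; note that 2AFC performance equals AROC in expectation, which is why the paper reports identical values for both.

```python
# Leave-one-out evaluation sketch with accuracy and AROC.
# Features and labels are synthetic stand-ins for the acted videos.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(5)
X = rng.normal(size=(40, 10))     # facial/head-motion features (assumed)
y = rng.integers(0, 2, size=40)   # toy 2-class stand-in for 18 classes

preds, scores = [], []
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=500).fit(X[train], y[train])
    preds.append(clf.predict(X[test])[0])
    scores.append(clf.predict_proba(X[test])[0, 1])

print("Accuracy:", accuracy_score(y, preds))
print("AROC:", roc_auc_score(y, scores))  # equals 2AFC in expectation
```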
{"title":"Automated recognition of complex categorical emotions from facial expressions and head motions","authors":"Andra Adams, P. Robinson","doi":"10.1109/ACII.2015.7344595","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344595","url":null,"abstract":"Classifying complex categorical emotions has been a relatively unexplored area of affective computing. We present a classifier trained to recognize 18 complex emotion categories. A leave-one-out training approach was used on 181 acted videos from the EU-Emotion Stimulus Set. Performance scores for the 18-choice classification problem were AROC = 0.84, 2AFC = 0.84, F1 = 0.33, Accuracy = 0.47. On a simplified 6-choice classification problem, the classifier had an accuracy of 0.64 compared with the validated human accuracy of 0.74. The classifier has been integrated into an expression training interface which gives meaningful feedback to humans on their portrayal of complex emotions through face and head movements. This work has applications as an intervention for Autism Spectrum Conditions.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"29 1","pages":"355-361"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74225878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
People show envy, not guilt, when making decisions with machines
C. D. Melo, J. Gratch
Research shows that people consistently reach more efficient solutions than those predicted by standard economic models, which assume people are selfish. Artificial intelligence, in turn, seeks to create machines that can achieve these levels of efficiency in human-machine interaction. However, as reinforced in this paper, people's decisions are systematically less efficient - i.e., less fair and favorable - with machines than with humans. To understand the cause of this bias, we resort to a well-known experimental economics model: Fehr and Schmidt's inequity aversion model. This model accounts for people's aversion to disadvantageous outcome inequality (envy) and aversion to advantageous outcome inequality (guilt). We present an experiment where participants engaged in the ultimatum and dictator games with human or machine counterparts. By fitting this data to Fehr and Schmidt's model, we show that people acted as if they were just as envious of humans as of machines; but, in contrast, people showed less guilt when making unfavorable decisions to machines. This result, thus, provides critical insight into this bias people show, in economic settings, in favor of humans. We discuss implications for the design of machines that engage in social decision making with humans.
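The Fehr and Schmidt utility itself is compact enough to state directly. In the two-player case, with alpha weighting disadvantageous inequality (envy) and beta weighting advantageous inequality (guilt):

```python
# Fehr-Schmidt inequity-aversion utility for two players:
#   U_i = x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)
def fehr_schmidt(x_i: float, x_j: float, alpha: float, beta: float) -> float:
    """Utility of player i given own payoff x_i and partner payoff x_j."""
    return x_i - alpha * max(x_j - x_i, 0) - beta * max(x_i - x_j, 0)

# The paper's finding, in these terms: the fitted beta (guilt) drops
# when the counterpart is a machine, while alpha (envy) stays similar.
# The parameter values below are arbitrary illustrations.
print(fehr_schmidt(3, 7, alpha=0.8, beta=0.4))  # envy of a higher payoff
print(fehr_schmidt(7, 3, alpha=0.8, beta=0.4))  # guilt over an advantage
```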
{"title":"People show envy, not guilt, when making decisions with machines","authors":"C. D. Melo, J. Gratch","doi":"10.1109/ACII.2015.7344589","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344589","url":null,"abstract":"Research shows that people consistently reach more efficient solutions than those predicted by standard economic models, which assume people are selfish. Artificial intelligence, in turn, seeks to create machines that can achieve these levels of efficiency in human-machine interaction. However, as reinforced in this paper, people's decisions are systematically less efficient - i.e., less fair and favorable - with machines than with humans. To understand the cause of this bias, we resort to a well-known experimental economics model: Fehr and Schmidt's inequity aversion model. This model accounts for people's aversion to disadvantageous outcome inequality (envy) and aversion to advantageous outcome inequality (guilt). We present an experiment where participants engaged in the ultimatum and dictator games with human or machine counterparts. By fitting this data to Fehr and Schmidt's model, we show that people acted as if they were just as envious of humans as of machines; but, in contrast, people showed less guilt when making unfavorable decisions to machines. This result, thus, provides critical insight into this bias people show, in economic settings, in favor of humans. We discuss implications for the design of machines that engage in social decision making with humans.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"56 1","pages":"315-321"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76160958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Cross-corpus analysis for acoustic recognition of negative interactions
I. Lefter, H. Nefs, C. Jonker, L. Rothkrantz
Recent years have witnessed a growing interest in recognizing emotions and events based on speech. One of the applications of such systems is automatically detecting when a situation gets out of hand and human intervention is needed. Most studies have focused on increasing recognition accuracies using parts of the same dataset for training and testing. However, this says little about how such a trained system is expected to perform 'in the wild'. In this paper we present a cross-corpus study using the audio part of three multimodal datasets containing negative human-human interactions. We present intra- and cross-corpus accuracies whilst manipulating the acoustic features, normalization schemes, and oversampling of the least represented class to alleviate the negative effects of data imbalance. We observe a decrease in performance when disjunct corpora are used for training and testing. Merging two datasets for training results in slightly lower performance than the best obtained by using only one corpus for training. A hand-crafted low-dimensional feature set shows competitive behavior when compared to a brute-force high-dimensional feature vector. Corpus normalization and artificially creating samples of the sparsest class have a positive effect.
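Two of the manipulations mentioned, per-corpus feature normalization and oversampling of the least represented class, are simple to sketch on toy data; the feature dimensions and class balance below are illustrative.

```python
# Sketch: per-corpus z-normalization of acoustic features before
# merging corpora, plus random oversampling of the minority class.
import numpy as np

rng = np.random.default_rng(6)
feats = rng.normal(loc=2.0, size=(100, 12))            # one corpus's features
labels = rng.choice([0, 1], size=100, p=[0.85, 0.15])  # unbalanced classes

# Corpus normalization: z-score within each corpus separately.
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)

# Oversample the least represented class until the classes balance.
counts = np.bincount(labels, minlength=2)
minority = np.flatnonzero(labels == counts.argmin())
extra = rng.choice(minority, size=counts.max() - counts.min(), replace=True)
feats = np.vstack([feats, feats[extra]])
labels = np.concatenate([labels, labels[extra]])
print(np.bincount(labels))  # classes now balanced
```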
{"title":"Cross-corpus analysis for acoustic recognition of negative interactions","authors":"I. Lefter, H. Nefs, C. Jonker, L. Rothkrantz","doi":"10.1109/ACII.2015.7344562","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344562","url":null,"abstract":"Recent years have witnessed a growing interest in recognizing emotions and events based on speech. One of the applications of such systems is automatically detecting when a situations gets out of hand and human intervention is needed. Most studies have focused on increasing recognition accuracies using parts of the same dataset for training and testing. However, this says little about how such a trained system is expected to perform `in the wild'. In this paper we present a cross-corpus study using the audio part of three multimodal datasets containing negative human-human interactions. We present intra- and cross-corpus accuracies whilst manipulating the acoustic features, normalization schemes, and oversampling of the least represented class to alleviate the negative effects of data unbalance. We observe a decrease in performance when disjunct corpora are used for training and testing. Merging two datasets for training results in a slightly lower performance than the best one obtained by using only one corpus for training. A hand crafted low dimensional feature set shows competitive behavior when compared to a brute force high dimensional features vector. Corpus normalization and artificially creating samples of the sparsest class have a positive effect.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"48 1","pages":"132-138"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78730588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15