
Latest publications from the 2015 International Conference on Affective Computing and Intelligent Interaction (ACII)

How do designers feel textiles?
B. Petreca, S. Baurley, N. Bianchi-Berthouze
Studying tactile experience is important and timely, considering how this channel is being harnessed both in human interaction and in technological developments that rely on it to enhance the experience of products and services. Research into tactile experience to date has mostly addressed the social context; there are few studies on tactile experience in interaction with objects. In this paper, we use textiles as a case study to investigate how we can get people to talk about this experience, and to understand what may be important to consider when designing technology to support it. We present a qualitative exploratory study using the 'Elicitation Interview' method to obtain a first-person verbal description of experiential processes. We conducted an initial study with 6 experienced professionals from the fashion and textiles area. The analysis revealed two types of touch behaviour in experiencing textiles, active and passive, which happen through the 'Active hand', 'Passive body' and 'Active tool-hand'. They can occur in any order, and with different degrees of importance and frequency, in the 3 tactile-based phases of the textile selection process - 'Situate', 'Simulate' and 'Stimulate' - and the interaction has different modes in each. We discuss these themes to inform the design of technology for affective touch in the textile field, and also to explore a methodology for uncovering the complexity of affective touch and its various purposes.
{"title":"How do designers feel textiles?","authors":"B. Petreca, S. Baurley, N. Bianchi-Berthouze","doi":"10.1109/ACII.2015.7344695","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344695","url":null,"abstract":"Studying tactile experience is important and timely, considering how this channel is being harnessed both in terms of human interaction and for technological developments that rely on it to enhance experience of products and services. Research into tactile experience to date is present mostly within the social context, but there are not many studies on the understanding of tactile experience in interaction with objects. In this paper, we use textiles as a case study to investigate how we can get people to talk about this experience, and to understand what may be important to consider when designing technology to support it. We present a qualitative exploratory study using the `Elicitation Interview' method to obtain a first-person verbal description of experiential processes. We conducted an initial study with 6 experienced professionals from the fashion and textiles area. The analysis revealed that there are two types of touch behaviour in experiencing textiles, active and passive, which happen through `Active hand', `Passive body' and `Active tool-hand'. They can occur in any order, and with different degrees of importance and frequency in the 3 tactile-based phases of the textile selection process - `Situate', `Simulate' and `Stimulate' - and the interaction has different modes in each. We discuss these themes to inform the design of technology for affective touch in the textile field, but also to explore a methodology to uncover the complexity of affective touch and its various purposes.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"982-987"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87246422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Decoupling facial expressions and head motions in complex emotions
Andra Adams, M. Mahmoud, T. Baltrušaitis, P. Robinson
Perception of emotion through facial expressions and head motion is of interest to both psychology and affective computing researchers. However, very little is known about the importance of each modality individually, as they are often treated together rather than separately. We present a study which isolates the effect of head motion from facial expression in the perception of complex emotions in videos. We demonstrate that head motions carry emotional information that is complementary rather than redundant to the emotion content in facial expressions. Finally, we show that emotional expressivity in head motion is not limited to nods and shakes and that additional gestures (such as head tilts, raises and general amount of motion) could be beneficial to automated recognition systems.
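
Quantifying the head-motion modality on its own, as this study does, typically starts from per-frame head pose estimates. Below is a minimal sketch of such features in Python, assuming pitch/yaw/roll time series from some face tracker; the feature names and the energy-based summary are illustrative, not the authors' method.

```python
import numpy as np

def head_motion_features(pitch, yaw, roll, fps=30.0):
    """Summarize head motion from per-frame pose angles (degrees).
    Besides nod (pitch) and shake (yaw) energy, it keeps tilt (roll)
    and the overall amount of motion, the cues found informative here."""
    pose = np.stack([pitch, yaw, roll])        # (3, T)
    vel = np.diff(pose, axis=1) * fps          # angular velocity, deg/s
    return {
        "nod_energy": float(np.mean(vel[0] ** 2)),
        "shake_energy": float(np.mean(vel[1] ** 2)),
        "tilt_energy": float(np.mean(vel[2] ** 2)),
        "total_motion": float(np.mean(np.linalg.norm(vel, axis=0))),
    }

# Toy usage: 3 s of synthetic pose at 30 fps (nodding plus slight tilting)
t = np.linspace(0.0, 3.0, 90)
print(head_motion_features(pitch=5 * np.sin(2 * np.pi * t),
                           yaw=np.zeros_like(t),
                           roll=2 * np.sin(np.pi * t)))
```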
{"title":"Decoupling facial expressions and head motions in complex emotions","authors":"Andra Adams, M. Mahmoud, T. Baltrušaitis, P. Robinson","doi":"10.1109/ACII.2015.7344583","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344583","url":null,"abstract":"Perception of emotion through facial expressions and head motion is of interest to both psychology and affective computing researchers. However, very little is known about the importance of each modality individually, as they are often treated together rather than separately. We present a study which isolates the effect of head motion from facial expression in the perception of complex emotions in videos. We demonstrate that head motions carry emotional information that is complementary rather than redundant to the emotion content in facial expressions. Finally, we show that emotional expressivity in head motion is not limited to nods and shakes and that additional gestures (such as head tilts, raises and general amount of motion) could be beneficial to automated recognition systems.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"21 1","pages":"274-280"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86097278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 42
Region-based image retrieval based on medical media data using ranking and multi-view learning
Wei Huang, Shuru Zeng, Guang Chen
In this study, a novel region-based image retrieval approach using ranking and multi-view learning techniques is introduced, for the first time, on the basis of medical multi-modality data. A surrogate ranking evaluation measure is derived, and direct optimization via gradient ascent is carried out on this surrogate measure to realize ranking and learning. A database of data from 1000 real patients is constructed, and several popular pattern recognition methods are implemented for performance comparison with our approach. The results suggest that, from a statistical point of view, our new method is superior to the others for this medical image retrieval task.
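
The paper's surrogate ranking measure is not spelled out in this listing; a common way to realize "direct optimization via gradient ascent" on a ranking objective is a sigmoid-smoothed pairwise (AUC-like) surrogate over a linear scoring function. The sketch below is under that assumption and is not the authors' exact formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smoothed_auc_and_grad(w, X_pos, X_neg, beta=5.0):
    """Differentiable ranking surrogate: mean sigmoid of pairwise score
    differences between relevant (pos) and irrelevant (neg) regions."""
    diff = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]   # (P, N) pairs
    s = sigmoid(beta * diff)
    g = beta * s * (1.0 - s)                             # d sigmoid / d diff
    grad = ((g.sum(1)[:, None] * X_pos).sum(0)
            - (g.sum(0)[:, None] * X_neg).sum(0)) / g.size
    return s.mean(), grad

# Toy data: features of relevant vs. irrelevant image regions
rng = np.random.default_rng(0)
X_pos = rng.normal(1.0, 1.0, (50, 8))
X_neg = rng.normal(0.0, 1.0, (80, 8))

w = np.zeros(8)
for _ in range(200):                # plain gradient ascent on the surrogate
    auc, grad = smoothed_auc_and_grad(w, X_pos, X_neg)
    w += 0.5 * grad
print(f"smoothed AUC after training: {auc:.3f}")
```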
{"title":"Region-based image retrieval based on medical media data using ranking and multi-view learning","authors":"Wei Huang, Shuru Zeng, Guang Chen","doi":"10.1109/ACII.2015.7344672","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344672","url":null,"abstract":"In this study, a novel region-based image retrieval approach via ranking and multi-view learning techniques is introduced for the first time based on medical multi-modality data. A surrogate ranking evaluation measure is derived, and direct optimization via gradient ascent is carried out based on the surrogate measure to realize ranking and learning. A database composed of 1000 real patients data is constructed and several popular pattern recognition methods are implemented for performance evaluation compared with ours. It is suggested that our new method is superior to others in this medical image retrieval utilization from the statistical point of view.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"50 1","pages":"845-850"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86114717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Inducing an ironic effect in automated tweets
A. Valitutti, T. Veale
Irony gives us a way to react creatively to disappointment. By allowing us to speak of a failed expectation as though it succeeded, irony stresses the naturalness of our expectation and the absurdity of its failure. The result of this playful use of language is a subtle valence shift as listeners are alerted to a gap between what is said and what is meant. But as irony is not without risks, speakers are often careful to signal an ironic intent with tone, body language, or if on Twitter, with the hashtag #irony. Yet given the subtlety of irony, we question the effectiveness of explicit marking, and empirically show how a stronger valence shift can be induced in automatically-generated creative tweets with more nuanced signals of irony.
{"title":"Inducing an ironic effect in automated tweets","authors":"A. Valitutti, T. Veale","doi":"10.1109/ACII.2015.7344565","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344565","url":null,"abstract":"Irony gives us a way to react creatively to disappointment. By allowing us to speak of a failed expectation as though it succeeded, irony stresses the naturalness of our expectation and the absurdity of its failure. The result of this playful use of language is a subtle valence shift as listeners are alerted to a gap between what is said and what is meant. But as irony is not without risks, speakers are often careful to signal an ironic intent with tone, body language, or if on Twitter, with the hashtag #irony. Yet given the subtlety of irony, we question the effectiveness of explicit marking, and empirically show how a stronger valence shift can be induced in automatically-generated creative tweets with more nuanced signals of irony.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"36 1","pages":"153-159"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89311633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
An experimental study of speech emotion recognition based on deep convolutional neural networks
W. Zheng, Jian Yu, Yuexian Zou
Speech emotion recognition (SER) is a challenging task, since it is unclear what kind of features are able to reflect the characteristics of human emotion in speech. Moreover, traditional feature extraction performs inconsistently across different emotion recognition tasks, while different spectrograms provide information reflecting different emotions. This paper proposes a systematic approach to implementing an effective emotion recognition system based on deep convolutional neural networks (DCNNs) using labeled training audio data. Specifically, the log-spectrogram is computed, and principal component analysis (PCA) is used to reduce the dimensionality and suppress interferences. The PCA-whitened spectrogram is then split into non-overlapping segments. The DCNN is trained to learn a representation of emotion from the segments using labeled training speech data. Our preliminary experiments show that the proposed emotion recognition system based on DCNNs (containing 2 convolutional and 2 pooling layers) achieves about 40% classification accuracy. Moreover, it also outperforms SVM-based classification using hand-crafted acoustic features.
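
The front end described here (log-spectrogram, PCA whitening, non-overlapping segments) and the 2-convolution/2-pooling DCNN can be sketched as follows. Layer widths, segment size and the number of retained components are illustrative assumptions; the paper's exact configuration is not given in this abstract.

```python
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn

def log_spec_segments(wave, sr=16000, n_keep=64, seg_len=64):
    """Log-spectrogram -> PCA whitening over frames -> fixed-size segments."""
    _, _, S = spectrogram(wave, fs=sr, nperseg=512, noverlap=256)
    X = np.log(S + 1e-10).T                    # (frames, freq bins)
    X -= X.mean(axis=0)                        # center before PCA
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    Xw = U[:, :n_keep] * np.sqrt(len(X))       # whitened top components
    n_seg = len(Xw) // seg_len
    return Xw[: n_seg * seg_len].reshape(n_seg, 1, seg_len, n_keep)

class EmotionDCNN(nn.Module):
    """2 convolution + 2 max-pooling layers, as the abstract describes."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Toy usage on 5 s of noise; real use would average logits per utterance.
segs = log_spec_segments(np.random.randn(5 * 16000))
logits = EmotionDCNN()(torch.tensor(segs, dtype=torch.float32))
print(logits.shape)                            # (n_segments, n_classes)
```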
{"title":"An experimental study of speech emotion recognition based on deep convolutional neural networks","authors":"W. Zheng, Jian Yu, Yuexian Zou","doi":"10.1109/ACII.2015.7344669","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344669","url":null,"abstract":"Speech emotion recognition (SER) is a challenging task since it is unclear what kind of features are able to reflect the characteristics of human emotion from speech. However, traditional feature extractions perform inconsistently for different emotion recognition tasks. Obviously, different spectrogram provides information reflecting difference emotion. This paper proposes a systematical approach to implement an effectively emotion recognition system based on deep convolution neural networks (DCNNs) using labeled training audio data. Specifically, the log-spectrogram is computed and the principle component analysis (PCA) technique is used to reduce the dimensionality and suppress the interferences. Then the PCA whitened spectrogram is split into non-overlapping segments. The DCNN is constructed to learn the representation of the emotion from the segments with labeled training speech data. Our preliminary experiments show the proposed emotion recognition system based on DCNNs (containing 2 convolution and 2 pooling layers) achieves about 40% classification accuracy. Moreover, it also outperforms the SVM based classification using the hand-crafted acoustic features.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"19 1","pages":"827-831"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84317516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 145
Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data
Huimin Wu, Qin Jin
Emotion classification for microblog texts has wide applications, for example in social security and business marketing. The amount of annotated microblog text is very limited. In this paper, we therefore study how to utilize annotated data from other domains (the source domain) to improve emotion classification on microblog texts (the target domain). Transfer learning has been a successful approach for cross-domain learning. However, to the best of our knowledge, little attention has been paid to automatically selecting appropriate samples from the source domain before applying transfer learning. In this paper, we propose an effective framework for sampling the available data in the source domain before transfer learning, which we name Two-Stage Sampling. The improvement in emotion classification on Chinese microblog texts demonstrates the effectiveness of our approach.
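
The abstract names the framework Two-Stage Sampling without algorithmic detail; one plausible reading is to first score how target-like each source sample is and then keep only the most target-like fraction for training. The sketch below uses a domain classifier as that score, which is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_stage_sample(X_src, y_src, X_tgt, keep=0.5):
    """Stage 1: score how target-like each source sample is with a
    source-vs-target domain classifier. Stage 2: keep only the most
    target-like fraction for training. The criterion is illustrative."""
    X = np.vstack([X_src, X_tgt])
    d = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]   # domain labels
    domain_clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_tgt = domain_clf.predict_proba(X_src)[:, 1]          # target-likeness
    idx = np.argsort(p_tgt)[-int(keep * len(X_src)):]
    return X_src[idx], y_src[idx]

# Toy usage: a shifted source domain; keep the half closest to the target
rng = np.random.default_rng(1)
X_src = rng.normal(0.0, 1.0, (200, 10))
y_src = rng.integers(0, 2, 200)
X_tgt = rng.normal(0.5, 1.0, (100, 10))
X_sel, y_sel = two_stage_sample(X_src, y_src, X_tgt)
print(X_sel.shape)      # (100, 10): source data selected before transfer
```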
{"title":"Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data","authors":"Huimin Wu, Qin Jin","doi":"10.1109/ACII.2015.7344668","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344668","url":null,"abstract":"Emotion classification for microblog texts has wide applications such as in social security and business marketing areas. The amount of annotated microblog texts is very limited. In this paper, we therefore study how to utilize annotated data from other domains (source domain) to improve emotion classification on microblog texts (target domain). Transfer learning has been a successful approach for cross domain learning. However, to the best of our knowledge, little attention has been paid for automatically selecting the appropriate samples from the source domain before applying transfer learning. In this paper, we propose an effective framework to sampling available data in the source domain before transfer learning, which we name as Two-Stage Sampling. The improvement of emotion classification on Chinese microblog texts demonstrates the effectiveness of our approach.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"75 1","pages":"821-826"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84339285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Engagement: A traceable motivational concept in human-robot interaction
Karl Drejing, Serge Thill, Paul E. Hemeren
Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done using data from multiple sources, such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework, consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.
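
The assessment idea in this framework (behavior intensities from multiple sources co-occurring with judged engagement) could, in the simplest case, be operationalized as a regression from aggregated behavior features onto observer ratings. A minimal sketch, with entirely illustrative feature names and synthetic data:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n = 120   # observed interaction episodes

# Behavior intensities from several sources (all names are hypothetical):
# gaze-toward-robot fraction (video), vocal response rate (audio),
# and task progress (interaction outcome).
features = np.column_stack([rng.random(n), rng.random(n), rng.random(n)])

# Synthetic observer-judged engagement standing in for human ratings.
judged = features @ np.array([0.5, 0.3, 0.2]) + rng.normal(0.0, 0.05, n)

model = Ridge().fit(features, judged)    # aggregate sources -> one score
print(model.predict(features[:3]))       # engagement estimates per episode
```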
{"title":"Engagement: A traceable motivational concept in human-robot interaction","authors":"Karl Drejing, Serge Thill, Paul E. Hemeren","doi":"10.1109/ACII.2015.7344690","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344690","url":null,"abstract":"Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"7 1","pages":"956-961"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86910778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Estimate the intimacy of the characters based on their emotional states for application to non-task dialogue
Kazuyuki Matsumoto, Kyosuke Akita, Minoru Yoshida, K. Kita, F. Ren
Recently, portable digital devices equipped with voice guidance have come into wide use, increasing the demand for usability-conscious dialogue systems. One problem with existing dialogue systems is their immature handling of non-task dialogue. Non-task-oriented dialogue requires schemes that enable smooth and flexible conversations with a user. For example, it would be possible to go beyond the closed relationship between the system and the user by considering the user's relationships with others in real life. In this paper, we focus on the dialogue between two characters in a drama scenario and try to express their relationship on a scale of "intimacy degree." Various elements relate to the intimacy degree, such as the frequency of responses to utterances and the speaker's attitude during the dialogue. We focus on the emotional state of the speaker during each utterance and try to estimate intimacy with higher accuracy. In our evaluation, we achieved higher accuracy in intimacy estimation than an existing method based on speech roles.
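
One way to realize intimacy estimation from emotional states, in the spirit of this paper, is to summarize each character's per-utterance emotion labels into a histogram and classify the pair into an intimacy degree. The feature design and classifier below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

EMOTIONS = ["joy", "anger", "sadness", "surprise", "neutral"]  # hypothetical set

def dialogue_features(emo_a, emo_b):
    """Concatenate each character's emotion histogram over their utterances."""
    def hist(seq):
        return np.bincount(seq, minlength=len(EMOTIONS)) / len(seq)
    return np.concatenate([hist(emo_a), hist(emo_b)])

# Toy data: 100 dialogues of 20 utterances per character, with an
# intimacy degree label per dialogue (0 = low, 1 = mid, 2 = high).
rng = np.random.default_rng(3)
X = np.array([dialogue_features(rng.integers(0, 5, 20), rng.integers(0, 5, 20))
              for _ in range(100)])
y = rng.integers(0, 3, 100)

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:2]))                # estimated intimacy degrees
```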
{"title":"Estimate the intimacy of the characters based on their emotional states for application to non-task dialogue","authors":"Kazuyuki Matsumoto, Kyosuke Akita, Minoru Yoshida, K. Kita, F. Ren","doi":"10.1109/ACII.2015.7344591","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344591","url":null,"abstract":"Recently, a portable digital device equipped with voice guidance has been widely used with increasing the demand for the usability-conscious dialogue system. One of the problems with the existing dialogue system is its immature application to non-task dialogue. Non-task-oriented dialogue requires some schemes that enable smooth and flexible conversations with a user. For example, it would be possible to go beyond the closed relationship between the system and the user by considering the user's relationship with others in real life. In this paper, we focused on the dialogue made by the two characters in a drama scenario, and tried to express their relationship with a scale of “intimacy degree.” There will be such various elements related to the intimacy degree as the frequency of response to the utterance and the attitude of a speaker during the dialogue. We focused on the emotional state of the speaker during the utterance and tried to realize intimacy estimation with higher accuracy. As the evaluation result, we achieved higher accuracy in intimacy estimation than the existing method based on speech role.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"66 1","pages":"327-333"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85614794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition
Yelin Kim
My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-changing behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and increasing the understanding of affective cues embedded within complex audio-visual data.
{"title":"Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition","authors":"Yelin Kim","doi":"10.1109/ACII.2015.7344653","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344653","url":null,"abstract":"My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-changing behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and increasing the understanding of affective cues embedded within complex audio-visual data.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"16 1","pages":"748-753"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84193534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation
G. McKeown, W. Curran, J. Wagner, F. Lingenfelser, E. André
To support the endeavor of creating intelligent interfaces between computers and humans, the use of training materials based on realistic human-human interactions has been recognized as a crucial task. One effect of the creation of these databases is an increased realization of the importance of often overlooked social signals and behaviours in organizing and orchestrating our interactions. Laughter is one of these key social signals; its importance in maintaining the smooth flow of human interaction has only recently become apparent in the embodied conversational agent domain. In turn, these realizations require training data that focus on these key social signals. This paper presents a database that is well annotated and theoretically constructed with respect to understanding laughter as it is used within human social interaction. Its construction, motivation, annotation and availability are presented in detail in this paper.
{"title":"The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation","authors":"G. McKeown, W. Curran, J. Wagner, F. Lingenfelser, E. André","doi":"10.1109/ACII.2015.7344567","DOIUrl":"https://doi.org/10.1109/ACII.2015.7344567","url":null,"abstract":"To support the endeavor of creating intelligent interfaces between computers and humans the use of training materials based on realistic human-human interactions has been recognized as a crucial task. One of the effects of the creation of these databases is an increased realization of the importance of often overlooked social signals and behaviours in organizing and orchestrating our interactions. Laughter is one of these key social signals; its importance in maintaining the smooth flow of human interaction has only recently become apparent in the embodied conversational agent domain. In turn, these realizations require training data that focus on these key social signals. This paper presents a database that is well annotated and theoretically constructed with respect to understanding laughter as it is used within human social interaction. Its construction, motivation, annotation and availability are presented in detail in this paper.","PeriodicalId":6863,"journal":{"name":"2015 International Conference on Affective Computing and Intelligent Interaction (ACII)","volume":"30 1","pages":"166-172"},"PeriodicalIF":0.0,"publicationDate":"2015-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78146251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19