
2015 International Conference on Affective Computing and Intelligent Interaction (ACII): Latest publications

How do designers feel textiles?
B. Petreca, S. Baurley, N. Bianchi-Berthouze
Studying tactile experience is important and timely, considering how this channel is being harnessed both in terms of human interaction and for technological developments that rely on it to enhance experience of products and services. Research into tactile experience to date is present mostly within the social context, but there are not many studies on the understanding of tactile experience in interaction with objects. In this paper, we use textiles as a case study to investigate how we can get people to talk about this experience, and to understand what may be important to consider when designing technology to support it. We present a qualitative exploratory study using the `Elicitation Interview' method to obtain a first-person verbal description of experiential processes. We conducted an initial study with 6 experienced professionals from the fashion and textiles area. The analysis revealed that there are two types of touch behaviour in experiencing textiles, active and passive, which happen through `Active hand', `Passive body' and `Active tool-hand'. They can occur in any order, and with different degrees of importance and frequency in the 3 tactile-based phases of the textile selection process - `Situate', `Simulate' and `Stimulate' - and the interaction has different modes in each. We discuss these themes to inform the design of technology for affective touch in the textile field, but also to explore a methodology to uncover the complexity of affective touch and its various purposes.
pp. 982-987 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344695
Citations: 20
Design of a wearable research tool for warm mediated social touches
Isabel Pfab, Christian J. A. M. Willemse
Social touches are essential in interpersonal communication, for instance to show affect. Despite this importance, mediated interpersonal communication oftentimes lacks the possibility to touch. A human touch is a complex composition of several physical qualities and parameters, but different haptic technologies allow us to isolate such parameters and to investigate their opportunities and limitations for affective communication devices. In our research, we focus on the role that temperature may play in affective mediated communication. In the current paper, we describe the design of a wearable `research tool' that will facilitate systematic research on the possibilities of temperature in affective communication. We present use cases, and define a list of requirements accordingly. Based on a requirement fulfillment analysis, we conclude that our research tool can be of value for research on new forms of affective mediated communication.
pp. 976-981 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344694
Citations: 2
Monocular 3D facial information retrieval for automated facial expression analysis
Meshia Cédric Oveneke, Isabel Gonzalez, Weiyi Wang, D. Jiang, H. Sahli
Understanding social signals is a very important aspect of human communication and interaction and has therefore attracted increased attention from various research areas. Among the different types of social signals, particular attention has been paid to facial expression of emotions and its automated analysis from image sequences. Automated facial expression analysis is a very challenging task due to the complex three-dimensional deformation and motion of the face associated with facial expressions and the loss of 3D information during the image formation process. As a consequence, retrieving 3D spatio-temporal facial information from image sequences is essential for automated facial expression analysis. In this paper, we propose a framework for retrieving three-dimensional facial structure, motion and spatio-temporal features from monocular image sequences. First, we estimate monocular 3D scene flow by retrieving the facial structure using shape-from-shading (SFS) and combine it with 2D optical flow. Secondly, based on the retrieved structure and motion of the face, we extract spatio-temporal features for automated facial expression analysis. Experimental results illustrate the potential of the proposed 3D facial information retrieval framework for facial expression analysis, i.e. facial expression recognition and facial action-unit recognition on a benchmark dataset. This paves the way for future research on monocular 3D facial expression analysis.
pp. 623-629 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344634
Citations: 3
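The geometric step described above, lifting 2D optical flow to 3D scene flow with the help of recovered facial structure, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes per-pixel depth maps (e.g. from shape-from-shading) and a dense 2D flow field are already available, and it uses a pinhole camera with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`).

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map to per-pixel 3D points (pinhole model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth], axis=-1)  # (h, w, 3)

def scene_flow(depth_t, depth_t1, flow, fx, fy, cx, cy):
    """Approximate 3D scene flow: lift both frames to 3D and subtract,
    using the 2D optical flow to find pixel correspondences."""
    h, w = depth_t.shape
    pts_t = backproject(depth_t, fx, fy, cx, cy)
    pts_t1 = backproject(depth_t1, fx, fy, cx, cy)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # destination pixel of each point under the 2D flow (nearest neighbour)
    u1 = np.clip(np.round(u + flow[..., 0]).astype(int), 0, w - 1)
    v1 = np.clip(np.round(v + flow[..., 1]).astype(int), 0, h - 1)
    return pts_t1[v1, u1] - pts_t  # (h, w, 3) displacement vectors
```

In practice the 2D flow field would come from a dense optical-flow estimator and the depth maps from an SFS step; here both are treated as given inputs.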
An ECA expressing appreciations
Sabrina Campano, Caroline Langlet, N. Glas, C. Clavel, C. Pelachaud
In this paper, we propose a computational model that provides an Embodied Conversational Agent (ECA) with the ability to generate verbal other-repetition (repetitions of some of the words uttered in the previous user speaker turn) when interacting with a user in a museum setting. We focus on the generation of other-repetitions expressing emotional stances in appreciation sentences. Emotional stances and their semantic features are selected according to the user's verbal input, and the ECA's utterance is generated according to these features. We present an evaluation of this model through users' subjective reports. Results indicate that the expression of emotional stances by the ECA has a positive effect on user engagement, and that the ECA's behaviours are rated as more believable by users when the ECA utters other-repetitions.
pp. 962-967 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344691
Citations: 15
Inducing an ironic effect in automated tweets
A. Valitutti, T. Veale
Irony gives us a way to react creatively to disappointment. By allowing us to speak of a failed expectation as though it succeeded, irony stresses the naturalness of our expectation and the absurdity of its failure. The result of this playful use of language is a subtle valence shift as listeners are alerted to a gap between what is said and what is meant. But as irony is not without risks, speakers are often careful to signal an ironic intent with tone, body language, or if on Twitter, with the hashtag #irony. Yet given the subtlety of irony, we question the effectiveness of explicit marking, and empirically show how a stronger valence shift can be induced in automatically-generated creative tweets with more nuanced signals of irony.
pp. 153-159 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344565
Citations: 8
Decoupling facial expressions and head motions in complex emotions
Andra Adams, M. Mahmoud, T. Baltrušaitis, P. Robinson
Perception of emotion through facial expressions and head motion is of interest to both psychology and affective computing researchers. However, very little is known about the importance of each modality individually, as they are often treated together rather than separately. We present a study which isolates the effect of head motion from facial expression in the perception of complex emotions in videos. We demonstrate that head motions carry emotional information that is complementary rather than redundant to the emotion content in facial expressions. Finally, we show that emotional expressivity in head motion is not limited to nods and shakes and that additional gestures (such as head tilts, raises and general amount of motion) could be beneficial to automated recognition systems.
pp. 274-280 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344583
Citations: 42
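The head-motion cues discussed above (nods, shakes, tilts, and overall amount of motion) can be turned into simple numeric descriptors given a head-pose time series. The feature definitions below are illustrative assumptions, not the measures used in the paper.

```python
import numpy as np

def head_motion_features(pitch, yaw, roll):
    """Simple descriptors of head motion from a pose time series (degrees):
    nod (pitch) and shake (yaw) energy, tilt (roll) range, and an overall
    amount-of-motion score. Illustrative feature choices only."""
    def energy(a):
        # sum of squared frame-to-frame changes
        return float(np.sum(np.diff(a) ** 2))
    return {
        "nod_energy": energy(pitch),
        "shake_energy": energy(yaw),
        "tilt_range": float(np.ptp(roll)),
        "total_motion": energy(pitch) + energy(yaw) + energy(roll),
    }
```

A recognizer could append these descriptors to its facial-expression features to exploit the complementary information the study reports.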
Region-based image retrieval based on medical media data using ranking and multi-view learning
Wei Huang, Shuru Zeng, Guang Chen
In this study, a novel region-based image retrieval approach via ranking and multi-view learning techniques is introduced for the first time based on medical multi-modality data. A surrogate ranking evaluation measure is derived, and direct optimization via gradient ascent is carried out based on the surrogate measure to realize ranking and learning. A database composed of data from 1,000 real patients is constructed, and several popular pattern recognition methods are implemented and compared with ours for performance evaluation. From a statistical point of view, the results suggest that our new method is superior to the others for this medical image retrieval task.
pp. 845-850 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344672
Citations: 3
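The optimization idea in the abstract above, maximizing a smooth surrogate of a ranking measure directly by gradient ascent, can be illustrated with a deliberately simple stand-in: a sigmoid-smoothed pairwise surrogate (an AUC-style measure) over a linear scoring function. The paper's actual surrogate measure and multi-view features are not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def surrogate_rank_score(w, X, y):
    """Smooth surrogate of a ranking measure: mean sigmoid of the score
    difference over every (relevant, irrelevant) pair."""
    s = X @ w
    d = s[y == 1][:, None] - s[y == 0][None, :]
    return sigmoid(d).mean()

def rank_learn(X, y, lr=0.5, steps=200):
    """Maximise the surrogate directly by gradient ascent on w."""
    w = np.zeros(X.shape[1])
    pos, neg = X[y == 1], X[y == 0]
    for _ in range(steps):
        d = (pos @ w)[:, None] - (neg @ w)[None, :]   # (n_pos, n_neg)
        g = sigmoid(d) * (1.0 - sigmoid(d))           # d/dd of sigmoid(d)
        # gradient of the mean over pairs of sigmoid(d_ij) w.r.t. w
        grad = (g.sum(axis=1) @ pos - g.sum(axis=0) @ neg) / g.size
        w += lr * grad
    return w
```

Because the sigmoid is differentiable, the otherwise piecewise-constant ranking objective admits the gradient steps above; the same trick extends to richer surrogates than this pairwise one.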
Engagement: A traceable motivational concept in human-robot interaction
Karl Drejing, Serge Thill, Paul E. Hemeren
Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect engagement of other humans can help us understand how we can build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition, based on motivation theories, and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done by the use of data from multiple sources such as observer ratings, kinematic data, audio and outcomes of interactions. We use the domain of human-robot interaction in order to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework consequently making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with an ability to reengage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.
pp. 956-961 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344690
Citations: 11
Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data
Huimin Wu, Qin Jin
Emotion classification for microblog texts has wide applications such as in social security and business marketing areas. The amount of annotated microblog texts is very limited. In this paper, we therefore study how to utilize annotated data from other domains (source domain) to improve emotion classification on microblog texts (target domain). Transfer learning has been a successful approach for cross domain learning. However, to the best of our knowledge, little attention has been paid to automatically selecting the appropriate samples from the source domain before applying transfer learning. In this paper, we propose an effective framework for sampling available data in the source domain before transfer learning, which we name Two-Stage Sampling. The improvement of emotion classification on Chinese microblog texts demonstrates the effectiveness of our approach.
pp. 821-826 · Published 2015-09-21 · DOI: 10.1109/ACII.2015.7344668
Citations: 3
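The abstract describes selecting source-domain samples before transfer learning, but does not spell out the two stages. The sketch below is therefore only an illustrative assumption of what such a first stage might look like: keep source samples whose average cosine similarity to the target domain exceeds a threshold (a second stage could then, for example, re-weight the survivors by classifier confidence). The function name and threshold are hypothetical, not taken from the paper.

```python
# Illustrative sketch of pre-transfer source-domain sample selection.
# The similarity filter below is an assumption for illustration only;
# it is not the paper's actual Two-Stage Sampling procedure.
import numpy as np

def select_source_samples(source_X, target_X, sim_threshold=0.5):
    """Keep source rows whose mean cosine similarity to the
    target-domain rows is at least `sim_threshold`."""
    def normalise(X):
        # L2-normalise rows so dot products become cosine similarities
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        return X / np.maximum(norms, 1e-12)

    S, T = normalise(np.asarray(source_X)), normalise(np.asarray(target_X))
    sims = S @ T.T                    # (n_source, n_target) cosine matrix
    scores = sims.mean(axis=1)        # average similarity to the target
    return np.flatnonzero(scores >= sim_threshold)

# Toy example with 2-D "feature" vectors: the second source sample
# points away from the target domain and is filtered out.
source = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
target = np.array([[0.9, 0.1], [1.0, 0.2]])
kept = select_source_samples(source, target)
```

In a real pipeline the rows would be document feature vectors (e.g. TF-IDF or embeddings), and only the `kept` subset of the source domain would be passed on to the transfer-learning step.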
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition
Yelin Kim
My PhD work aims at developing computational methodologies for automatic emotion recognition from audiovisual behavioral data. A main challenge in automatic emotion recognition is that human behavioral data are highly complex, due to multiple sources that vary and modulate behaviors. My goal is to provide computational frameworks for understanding and controlling for multiple sources of variation in human behavioral data that co-occur with the production of emotion, with the aim of improving automatic emotion recognition systems [1]-[6]. In particular, my research aims at providing representation, modeling, and analysis methods for complex and time-changing behaviors in human audio-visual data by introducing temporal segmentation and time-series analysis techniques. This research contributes to the affective computing community by improving the performance of automatic emotion recognition systems and increasing the understanding of affective cues embedded within complex audio-visual data.
Yelin Kim. "Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition." 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015-09-21, pp. 748-753. doi:10.1109/ACII.2015.7344653.
Citations: 2
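The summary above mentions temporal segmentation as a core technique for handling time-changing audio-visual behavior. As a minimal sketch of that idea (the window length and per-window statistics are assumptions for illustration, not the author's actual method), one can slice a frame-level feature stream into fixed-length windows and summarise each window:

```python
# Minimal sketch of temporal segmentation for a frame-level feature
# stream: fixed-length windows summarised by mean and std. Window size
# and statistics are illustrative assumptions only.
import numpy as np

def segment_features(frames, window=4):
    """Split a (n_frames, n_dims) feature stream into windows of
    `window` frames; return per-window mean and std as descriptors."""
    frames = np.asarray(frames, dtype=float)
    n = (len(frames) // window) * window   # drop any ragged tail
    windows = frames[:n].reshape(-1, window, frames.shape[1])
    means = windows.mean(axis=1)
    stds = windows.std(axis=1)
    return np.hstack([means, stds])        # (n_windows, 2 * n_dims)

# Toy stream: 10 frames of 2-D features -> 2 full windows of 4 frames
stream = np.arange(20, dtype=float).reshape(10, 2)
segments = segment_features(stream, window=4)
```

Segment-level descriptors like these are a common input to downstream classifiers, since emotion cues unfold over spans of frames rather than single frames.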
Journal
2015 International Conference on Affective Computing and Intelligent Interaction (ACII)