
2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW): Latest Publications

Eliciting Confusion in Online Conversational Tasks
Nikhil Kaushik, Reynold Bailey, Alexander Ororbia, Cecilia Ovesdotter Alm
Confusion is a complex affective experience that involves both emotional and cognitive components and is less conspicuous than core emotions such as anger or sadness. We discuss an online data collection study designed to elicit confusion in spontaneous conversations across two dialogue tasks. Results from an analysis of the multimodal data (transcribed spoken language and facial expressions) suggest that the tasks induced naturalistic confusion, a step towards automated confusion recognition.
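The abstract does not specify the analysis pipeline itself. Purely as an illustrative baseline for spotting candidate confusion cues in transcribed speech, one can scan utterances for lexical markers; the marker list and scoring below are assumptions for the sketch, not the authors' method.

```python
# Illustrative baseline only: flag candidate confusion cues in a transcript
# with a hand-picked lexical marker list (an assumption, not the paper's method).
import re

CONFUSION_MARKERS = [
    r"\bwait\b", r"\bwhat\?", r"\bhuh\b", r"\bi don'?t (get|understand)\b",
    r"\bconfus(ed|ing)\b", r"\bnot sure\b", r"\bwhat do you mean\b",
]

def confusion_score(utterance):
    """Count how many confusion markers fire in one utterance."""
    text = utterance.lower()
    return sum(bool(re.search(p, text)) for p in CONFUSION_MARKERS)

for u in ["Wait, what do you mean by that?", "Sounds good, let's continue."]:
    print(u, "->", confusion_score(u))
```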
{"title":"Eliciting Confusion in Online Conversational Tasks","authors":"Nikhil Kaushik, Reynold Bailey, Alexander Ororbia, Cecilia Ovesdotter Alm","doi":"10.1109/aciiw52867.2021.9666351","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666351","url":null,"abstract":"Confusion is a complex affective experience that involves both emotional and cognitive components, being less conspicuous than core emotions such as anger or sadness. We discuss an online data collection study designed to elicit confusion in spontaneous conversations across two dialogue tasks. Results from an analysis of the multimodal data (transcribed spoken language and facial expressions) suggest that the tasks induced naturalistic confusion, towards automated confusion recognition.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132552490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Using Multimodal Transformers in Affective Computing
Juan Vazquez-Rodriguez
Having devices capable of understanding human emotions will significantly improve the way people interact with them. Moreover, if those devices are capable of influencing the emotions of users in a positive way, this will improve their quality of life, especially for frail or dependent users. A first step towards this goal is improving the performance of emotion recognition systems. Specifically, using a multimodal approach is appealing, as the availability of different signals is growing. We believe that it is important to incorporate new architectures and techniques like the Transformer and BERT, and to investigate how to use them in a multimodal setting. Also, it is essential to develop self-supervised learning techniques to take advantage of the considerable quantity of unlabeled data available nowadays. In this extended abstract, we present our research in those directions.
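As a hedged illustration of the kind of architecture the abstract points to, the sketch below fuses a text-feature sequence (e.g., BERT-style embeddings) and an audio-feature sequence in a single Transformer encoder with learned modality embeddings. All dimensions, layer counts, and the mean-pooling readout are assumptions for the sketch, not a reported model.

```python
# Minimal PyTorch sketch: joint Transformer encoding of two modalities.
import torch
import torch.nn as nn

class MultimodalTransformer(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, d_model=256, n_classes=7):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)    # project text features
        self.audio_proj = nn.Linear(audio_dim, d_model)  # project audio features
        self.mod_emb = nn.Embedding(2, d_model)          # tells modalities apart
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, text_feats, audio_feats):
        # text_feats: (B, T_text, text_dim); audio_feats: (B, T_audio, audio_dim)
        t = self.text_proj(text_feats) + self.mod_emb.weight[0]
        a = self.audio_proj(audio_feats) + self.mod_emb.weight[1]
        x = torch.cat([t, a], dim=1)     # one joint sequence across modalities
        h = self.encoder(x)              # self-attention sees both modalities
        return self.head(h.mean(dim=1))  # mean-pool, then emotion logits

logits = MultimodalTransformer()(torch.randn(2, 12, 768), torch.randn(2, 50, 128))
print(logits.shape)  # torch.Size([2, 7])
```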
{"title":"Using Multimodal Transformers in Affective Computing","authors":"Juan Vazquez-Rodriguez","doi":"10.1109/aciiw52867.2021.9666396","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666396","url":null,"abstract":"Having devices capable of understanding human emotions will significantly improve the way people interact with them. Moreover, if those devices are capable of influencing the emotions of users in a positive way, this will improve their quality of life, especially for frail or dependent users. A first step towards this goal is improving the performance of emotion recognition systems. Specifically, using a multimodal approach is appealing, as the availability of different signals is growing. We believe that it is important to incorporate new architectures and techniques like the Transformer and BERT, and to investigate how to use them in a multimodal setting. Also, it is essential to develop self-supervised learning techniques to take advantage of the considerable quantity of unlabeled data available nowadays. In this extended abstract, we present our research in those directions.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116728641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Head and Body Movement Patterns in Naturalistic Human-Machine Interaction
Jannes Bützer, Ronald Böck
This paper addresses the investigation and recognition of upper-body movements during naturalistic Human-Machine Interaction, in which humans interact with a technical system while sitting in front of it. For this, we focus on the Last Minute Corpus, which provides such a naturalistic scenario in combination with multimodal recordings. For feature extraction, an approach called Probabilistic Breadth Features was used, allowing a condensed investigation of movement patterns. Finally, the classification was based on Extreme Learning Machines, comparing features obtained in three different conditions: the Kinect's spine point, head point, and a combination of both. In the context of this naturalistic interaction setting, a mean accuracy of 86.1% was achieved.
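For readers unfamiliar with Extreme Learning Machines: the randomly initialized input weights stay fixed, and only the output weights are solved in closed form via a pseudoinverse. A minimal NumPy sketch under that definition follows; the hidden size and toy data are illustrative assumptions (the Probabilistic Breadth Features are not reproduced here).

```python
# Minimal Extreme Learning Machine sketch with random hidden weights
# and least-squares output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = y.max() + 1
        # Random, untrained input weights: the defining trait of an ELM.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        T = np.eye(n_classes)[y]              # one-hot targets
        # Output weights via the Moore-Penrose pseudoinverse.
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# Toy usage on random stand-in "movement feature" vectors.
X = np.random.default_rng(1).normal(size=(100, 30))
y = (X[:, 0] > 0).astype(int)
print((ELM().fit(X, y).predict(X) == y).mean())  # training accuracy
```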
{"title":"Comparison of Head and Body Movement Patterns in Naturalistic Human-Machine Interaction","authors":"Jannes Bützer, Ronald Böck","doi":"10.1109/ACIIW52867.2021.9666244","DOIUrl":"https://doi.org/10.1109/ACIIW52867.2021.9666244","url":null,"abstract":"This paper aims on the investigation and recognition of upper-body movements during a naturalistic Human-Machine Interaction, in which humans interact with a technical system while sitting in front of it. Therefore, we focus on the Last Minute Corpus, that provides such a naturalistic scenario in combination with multimodal recordings. For feature extraction an approach called Probabilistic Breadth Features was used, allowing a condensed investigation of movement patterns. Finally, the classification was based on Extreme Learning Machines, comparing features obtained in three different conditions: the Kinect's spine point, head point, and a combination of both. In context of this naturalistic interaction setting, a mean accuracy of 86.1% was achieved.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117135506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unbiased Mimic Activity Evaluation: F2F Emotion Studio Software
M. Baev, A. Gusev, A. Kremlev
We developed research software that allows users to accurately detect FACS AUs and basic emotion expressions. The software was developed as a comprehensive FACS-based measurement tool. Due to their inherent limitations, we do not use any kind of neural-network facial expression classification. Instead, we created five computer vision procedures of our own design and a set of logical rules to detect 18 AUs and seven basic emotion expressions. The software can evaluate both macro- and microexpressions. As evaluation results, we provide three examples of analyzing data taken from the SAMM and CASME II databases using the F2F Emotion Studio software.
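The abstract does not publish the rule set itself. As a hedged illustration, the sketch below encodes EMFACS-style AU prototypes (e.g., AU6 + AU12 for happiness) and a subset-matching rule, one common way logical rules over detected AUs are expressed; the exact prototypes and tie-breaking are assumptions, not the F2F Emotion Studio rules.

```python
# Illustrative EMFACS-style prototypes; the actual F2F Emotion Studio
# rule set is not given in the abstract.
EMOTION_RULES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},
}

def classify_expression(active_aus):
    """Map a set of detected AU numbers to a basic-emotion label."""
    # A rule fires when every AU it requires is active; prefer the most
    # specific (largest) firing rule as an assumed tie-break.
    matches = [(len(aus), emotion) for emotion, aus in EMOTION_RULES.items()
               if aus <= set(active_aus)]
    return max(matches)[1] if matches else "neutral"

print(classify_expression({6, 12}))        # happiness
print(classify_expression({1, 2, 5, 26}))  # surprise
print(classify_expression({17}))           # neutral
```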
{"title":"Unbiased Mimic Activity Evaluation: F2F Emotion Studio Software","authors":"M. Baev, A. Gusev, A. Kremlev","doi":"10.1109/aciiw52867.2021.9666319","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666319","url":null,"abstract":"We developed a research software which allows users to accurately detect FACS AUs and basic emotion expressions. This software was developed as a comprehensive FACS based measurement tool. Due to their inherent limitations we don't use any kind of neural network facial expression classification. We created five author's computer vision procedures and a set of logical rules to detect 18 AUs and seven basic emotion expressions. The software could evaluate both macro- and microexpressions. As evaluation results we provided three examples of analyzing data taken from the SAMM and CASME II databases using F2F Emotion Studio software.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114705944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fantastic Ideas and Where to Find Them: Elevating Creativity in Self-organizing Social Networks
Raiyan Abdul Baten
As the prevalence of automation increases, creativity will play an ever-larger role in the tasks humans accomplish. In this dissertation, we first explore empirically how a social network's connectivity patterns and creative outcomes are affected by factors such as creative performance, popularity, and identity attributes of people. Accordingly, we seek to devise intelligent intervention approaches that can harness the empirical insights to optimize network-wide creative outcomes. We envision our work to inform not only managerial and algorithmic decision-making, but also public policy as it relates to helping humans become more creatively productive in a social network.
{"title":"Fantastic Ideas and Where to Find Them: Elevating Creativity in Self-organizing Social Networks","authors":"Raiyan Abdul Baten","doi":"10.1109/aciiw52867.2021.9666357","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666357","url":null,"abstract":"As the prevalence of automation increases, creativity will play an ever-larger role in the tasks humans accomplish. In this dissertation, we first explore empirically how a social net-work's connectivity patterns and creative outcomes are affected by factors such as creative performance, popularity, and identity attributes of people. Accordingly, we seek to devise intelligent intervention approaches that can harness the empirical insights to optimize network-wide creative outcomes. We envision our work to inform not only managerial and algorithmic decision-making, but also public policy as it relates to helping humans become more creatively productive in a social network.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123462762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Understanding Shame Signals: Functions of Smile and Laughter in the Context of Shame
Mirella Hladký, T. Schneeberger, Patrick Gebhard
Computational emotion recognition focuses on observable expressions. In the case of highly unpleasant emotions that are rarely displayed openly and mostly unconsciously regulated - such as shame - this approach can be difficult. In previous studies, we found participants to smile and laugh while experiencing shame. Most current approaches interpret smiles and laughter as signals of enjoyment. They neglect the internal emotional experience and the complexity of social signals. We present a planned mixed-methods study that will investigate underlying functions of smiles and laughter in shameful situations and how those reflect in the morphology of expression. Participants' smiles and laughter during shame-eliciting situations will be analyzed using behavioral observations. Semi-structured interviews will investigate their functions. The gained knowledge can improve computational emotion recognition and avoid misinterpretations of smiles and laughter. In the scope of the open science initiative, we describe the planned study in detail with its research questions, hypotheses, design, methods, and analyses.
{"title":"Understanding Shame Signals: Functions of Smile and Laughter in the Context of Shame","authors":"Mirella Hladký, T. Schneeberger, Patrick Gebhard","doi":"10.1109/aciiw52867.2021.9666424","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666424","url":null,"abstract":"Computational emotion recognition focuses on observable expressions. In the case of highly unpleasant emotions that are rarely displayed openly and mostly unconsciously regulated - such as shame - this approach can be difficult. In previous studies, we found participants to smile and laugh while experiencing shame. Most current approaches interpret smiles and laughter as signals of enjoyment. They neglect the internal emotional experience and the complexity of social signals. We present a planned mixed-methods study that will investigate underlying functions of smiles and laughter in shameful situations and how those reflect in the morphology of expression. Participants' smiles and laughter during shame-eliciting situations will be analyzed using behavioral observations. Semi-structured interviews will investigate their functions. The gained knowledge can improve computational emotion recognition and avoid misinterpretations of smiles and laughter. In the scope of the open science initiative, we describe the planned study in detail with its research questions, hypotheses, design, methods, and analyses.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131319608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The AffectMove Challenge: some machine learning approaches
G. Dray, Pierre-Antoine Jean, Yann Maheu, J. Montmain, Nicolas Sutton-Charani
This paper describes some machine learning methods that we implemented to participate in the AffectMove challenge, which aims to develop technologies for classifying body movements in the areas of physical rehabilitation for chronic pain, mathematical problem solving, and interactive dance. The methods and results obtained are presented, together with some directions for future work.
{"title":"The AffectMove Challenge: some machine learning approaches","authors":"G. Dray, Pierre-Antoine Jean, Yann Maheu, J. Montmain, Nicolas Sutton-Charani","doi":"10.1109/aciiw52867.2021.9666318","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666318","url":null,"abstract":"This paper describes some machine learning methods that we have implemented to participate in the AffectMove challenge which aims to develop technologies for classification of body movements in the areas of physical rehabilitation of chronic pain, mathematical problem solving and interactive dance contexts. The methods and results obtained are presented as well as some futureworks.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115822944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Towards Understanding Confusion and Affective States Under Communication Failures in Voice-Based Human-Machine Interaction
Sujeong Kim, Abhinav Garlapati, Jonah Lubin, Amir Tamrakar, Ajay Divakaran
We present a series of two studies conducted to understand users' affective states during voice-based human-machine interactions. Emphasis is placed on cases of communication errors or failures. In particular, we are interested in understanding "confusion" in relation to other affective states. The studies consist of two types of tasks: (1) communication with a voice-based virtual agent: speaking to the machine and understanding what the machine says; (2) non-communication-related, problem-solving tasks, in which participants solve puzzles and riddles but are asked to verbally explain the answers to the machine. We collected audio-visual data and self-reports of the affective states of the participants. We report the results of the two studies and an analysis of the collected data. The first study was analyzed based on the annotators' observations, and the second study was analyzed based on the self-reports.
{"title":"Towards Understanding Confusion and Affective States Under Communication Failures in Voice-Based Human-Machine Interaction","authors":"Sujeong Kim, Abhinav Garlapati, Jonah Lubin, Amir Tamrakar, Ajay Divakaran","doi":"10.1109/ACIIW52867.2021.9666238","DOIUrl":"https://doi.org/10.1109/ACIIW52867.2021.9666238","url":null,"abstract":"We present a series of two studies conducted to understand user's affective states during voice-based human-machine interactions. Emphasis is placed on the cases of communication errors or failures. In particular, we are interested in understanding “confusion” in relation with other affective states. The studies consist of two types of tasks: (1) related to communication with a voice-based virtual agent: speaking to the machine and understanding what the machine says, (2) non-communication related, problem-solving tasks where the participants solve puzzles and riddles but are asked to verbally explain the answers to the machine. We collected audio-visual data and self-reports of affective states of the participants. We report results of two studies and analysis of the collected data. The first study was analyzed based on the annotator's observation, and the second study was analyzed based on the self-report.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130308257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Economic and Social Consequences of Anger and Gender in Computer-Mediated Negotiations: Is there a Backlash Against Angry Females?
Janet Wessler
This experimental research investigated whether there are economic and social sanctions (i.e., backlash) for counter-stereotypically behaving, angry females in computer-mediated negotiations. Participants (N = 82) received angry or joyful chat messages from their ostensible male or female opposite (i.e., a computer program). Results confirm the well-known anger effect: participants demanded lower points for themselves when negotiating with an angry vs. joyful opposite. Moreover, participants liked the angry opposite less and perceived them as less competent and more competitive. However, the opposite's gender did not moderate these findings, although exploratory evidence for a backlash effect emerged: angry females had descriptively lower negotiation outcomes, were liked less, and were perceived as significantly more competitive than angry males. These results suggest that when studying negotiations in human-agent interactions, both emotions and gender should be considered as important factors driving negotiation results and social perceptions of the agent.
{"title":"Economic and Social Consequences of Anger and Gender in Computer-Mediated Negotiations: Is there a Backlash Against Angry Females?","authors":"Janet Wessler","doi":"10.1109/aciiw52867.2021.9666337","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666337","url":null,"abstract":"This experimental research investigated if there are economic and social sanctions (i.e., backlash) for counter-stereotypically behaving, angry females in computer-mediated negotiations. Participants (N = 82) received angry or joyful chat messages from their ostensible male or female opposite (i.e., a computer program). Results confirm the well-known anger effect: Participants demanded lower points for themselves when negotiating with an angry vs. joyful opposite. Moreover, participants liked the angry opposite less, perceived them as less competent and more competitive. However, the opposite's gender did not moderate these findings, although exploratory evidence for a backlash effect emerged: Angry females had descriptively lower negotiation outcomes, were liked less and perceived as significantly more competitive than angry males. These results suggest that when studying negotiations in human-agent interactions, both emotions and gender should be considered as important factors driving negotiation results and social perceptions of the agent.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134389476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Temporal conditional Wasserstein GANs for audio-visual affect-related ties
C. Athanasiadis, E. Hortal, Stelios Asteriadis
Emotion recognition through audio is a rather challenging task that entails proper feature extraction and classification. Meanwhile, state-of-the-art classification strategies are usually based on deep learning architectures. Training complex deep learning networks normally requires very large audiovisual corpora with available emotion annotations. However, such availability is not always guaranteed, since harvesting and annotating such datasets is a time-consuming task. In this work, temporal conditional Wasserstein Generative Adversarial Networks (tc-wGANs) are introduced to generate robust audio data by leveraging information from the face modality. Taking as input temporal facial features extracted using a dynamic deep learning architecture (based on 3dCNN, LSTM and Transformer networks) and, additionally, conditional information related to annotations, our system manages to generate realistic spectrograms that represent audio clips corresponding to a specific emotional context. As proof of their validity, apart from three quality metrics (Frechet Inception Distance, Inception Score and Structural Similarity index), we verified the generated samples by applying an audio-based emotion recognition schema. When the generated samples are fused with the initial real ones, an improvement of between 3.5% and 5.5% in audio emotion recognition performance was achieved on two state-of-the-art datasets.
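The paper's tc-wGAN details are not reproduced in this abstract; the sketch below shows the generic conditional Wasserstein critic objective with gradient penalty that such systems build on, with facial features as the conditioning vector. All shapes and modules are placeholder assumptions, not the authors' architecture.

```python
# Schematic PyTorch sketch of one conditional WGAN-GP critic step,
# conditioning a spectrogram generator on facial features.
import torch
import torch.nn as nn

Z, COND, SPEC = 64, 128, 257   # latent, facial-feature, spectrogram-frame dims
G = nn.Sequential(nn.Linear(Z + COND, 512), nn.ReLU(), nn.Linear(512, SPEC))
D = nn.Sequential(nn.Linear(SPEC + COND, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

def critic_loss(real, cond, lam=10.0):
    """Conditional WGAN-GP critic objective (to be minimized)."""
    noise = torch.randn(real.size(0), Z)
    fake = G(torch.cat([noise, cond], dim=1))
    # Wasserstein term: push critic scores up on real, down on fake.
    w = D(torch.cat([fake, cond], 1)).mean() - D(torch.cat([real, cond], 1)).mean()
    # Gradient penalty on interpolates enforces the 1-Lipschitz constraint.
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    d_hat = D(torch.cat([x_hat, cond], 1))
    grads = torch.autograd.grad(d_hat.sum(), x_hat, create_graph=True)[0]
    return w + lam * ((grads.norm(2, dim=1) - 1) ** 2).mean()

print(critic_loss(torch.randn(8, SPEC), torch.randn(8, COND)).item())
```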
{"title":"Temporal conditional Wasserstein GANs for audio-visual affect-related ties","authors":"C. Athanasiadis, E. Hortal, Stelios Asteriadis","doi":"10.1109/aciiw52867.2021.9666277","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666277","url":null,"abstract":"Emotion recognition through audio is a rather challenging task that entails proper feature extraction and classification. Meanwhile, state-of-the-art classification strategies are usually based on deep learning architectures. Training complex deep learning networks normally requires very large audiovisual corpora with available emotion annotations. However, such availability is not always guaranteed since harvesting and annotating such datasets is a time-consuming task. In this work, temporal conditional Wasserstein Generative Adversarial Networks (tc-wGANs) are introduced to generate robust audio data by leveraging information from a face modality. Having as input temporal facial features extracted using a dynamic deep learning architecture (based on 3dCNN, LSTM and Transformer networks) and, additionally, conditional information related to annotations, our system manages to generate realistic spectrograms that represent audio clips corresponding to specific emotional context. As proof of their validity, apart from three quality metrics (Frechet Inception Distance, Inception Score and Structural Similarity index), we verified the generated samples applying an audio-based emotion recognition schema. When the generated samples are fused with the initial real ones, an improvement between 3.5 to 5.5% was achieved in audio emotion recognition performance for two state-of-the-art datasets.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133687144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0