Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation
Pub Date: 2021-09-28 | DOI: 10.1109/ACIIW52867.2021.9666315
Raman Goel, Seba Susan, Sachin Vashisht, Armaan Dhanda
Modern-day conversational agents are trained to emulate the manner in which humans communicate. To bond emotionally with the user, these virtual agents need to be aware of the user's affective state. Transformers are the current state of the art in sequence-to-sequence learning, training an encoder-decoder model on word embeddings from utterance-response pairs. We propose an emotion-aware transformer encoder that captures the emotional quotient of the user utterance in order to generate human-like empathetic responses. The contributions of our paper are as follows: 1) an emotion detector module trained on the input utterances determines the affective state of the user in the initial phase; 2) a novel transformer encoder adds and normalizes the word embedding with an emotion embedding, thereby integrating the semantic and affective aspects of the input utterance; 3) the encoder and decoder stacks follow the Transformer-XL architecture, the recent state of the art in language modeling. Experiments on the benchmark Facebook AI empathetic dialogue dataset confirm the efficacy of our model through the higher BLEU-4 scores achieved for the generated responses compared to existing methods. Emotionally intelligent virtual agents are now a reality, and the inclusion of affect as a modality in all human-machine interfaces is foreseen in the immediate future.
{"title":"Emotion-Aware Transformer Encoder for Empathetic Dialogue Generation","authors":"Raman Goel, Seba Susan, Sachin Vashisht, Armaan Dhanda","doi":"10.1109/ACIIW52867.2021.9666315","DOIUrl":"https://doi.org/10.1109/ACIIW52867.2021.9666315","url":null,"abstract":"Modern day conversational agents are trained to emulate the manner in which humans communicate. To emotionally bond with the user, these virtual agents need to be aware of the affective state of the user. Transformers are the recent state of the art in sequence-to-sequence learning that involves training an encoder-decoder model with word embeddings from utterance-response pairs. We propose an emotion-aware transformer encoder for capturing the emotional quotient in the user utterance in order to generate human-like empathetic responses. The contributions of our paper are as follows: 1) An emotion detector module trained on the input utterances determines the affective state of the user in the initial phase 2) A novel transformer encoder is proposed that adds and normalizes the word embedding with emotion embedding thereby integrating the semantic and affective aspects of the input utterance 3) The encoder and decoder stacks belong to the Transformer-XL architecture which is the recent state of the art in language modeling. Experimentation on the benchmark Facebook AI empathetic dialogue dataset confirms the efficacy of our model from the higher BLEU-4 scores achieved for the generated responses as compared to existing methods. Emotionally intelligent virtual agents are now a reality and inclusion of affect as a modality in all human-machine interfaces is foreseen in the immediate future.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126862580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dream Net: a privacy-preserving continual learning model for face emotion recognition
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666338
M. Mainsant, M. Solinas, M. Reyboz, C. Godin, M. Mermillod
Continual learning is a growing challenge in artificial intelligence. Among the algorithms developed in recent years to alleviate catastrophic forgetting, only a few studies have focused on face emotion recognition. In parallel, the field of emotion recognition has raised the ethical issue of privacy preservation. This paper presents Dream Net, a privacy-preserving continual learning model for face emotion recognition. Using a pseudo-rehearsal approach, the model alleviates catastrophic forgetting by capturing the mapping function of a trained network without storing examples of the learned knowledge. We evaluated Dream Net on the FER-2013 database and obtained an average accuracy of 45% ± 2 at the end of incremental learning of all classes, compared to 16% ± 0 without any continual learning model.
{"title":"Dream Net: a privacy preserving continual leaming model for face emotion recognition","authors":"M. Mainsant, M. Solinas, M. Reyboz, C. Godin, M. Mermillod","doi":"10.1109/aciiw52867.2021.9666338","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666338","url":null,"abstract":"Continual learning is a growing challenge of artificial intelligence. Among algorithms alleviating catastrophic forgetting that have been developed in the past years, only few studies were focused on face emotion recognition. In parallel, the field of emotion recognition raised the ethical issue of privacy preserving. This paper presents Dream Net, a privacy preserving continual learning model for face emotion recognition. Using a pseudo-rehearsal approach, this model alleviates catastrophic forgetting by capturing the mapping function of a trained network without storing examples of the learned knowledge. We evaluated Dream Net on the Fer-2013 database and obtained an average accuracy of 45% ± 2 at the end of incremental learning of all classes compare to 16% ± 0 without any continual learning model.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121695869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion Recognition In Emergency Call Centers: The challenge of real-life emotions
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666308
Théo Deschamps-Berger
The detected emotional states of speakers are a key component of constructive social relationships, and also of efficiency in gauging the degree of emergency. This paper provides an overview of my doctoral project, which focuses on bimodal emotion recognition in an emergency call center using deep end-to-end learning techniques and advanced approaches such as transformers and zero-shot learning. In this work, we will first propose a supervised classification system for bimodal (paralinguistic and linguistic) emotion recognition. Then, we will investigate an unsupervised system as a complement to the first, in order to deal with “unseen” emotions and mixtures of real-life emotions. Our previous studies mainly explored the acoustic modality of speech emotion recognition (SER): we achieved results close to the state of the art on the improvised part of the well-known IEMOCAP database and applied our approach to CEMO, a French emergency database collected in a previous project. During my thesis, new real recordings will be collected in an emergency call center. The main research topics of my thesis are: emotional representation and annotation; speech emotion recognition and ethical implications; evaluation and real-life trials.
{"title":"Emotion Recognition In Emergency Call Centers: The challenge of real-life emotions","authors":"Théo Deschamps-Berger","doi":"10.1109/aciiw52867.2021.9666308","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666308","url":null,"abstract":"Detected emotional states of speakers are a key component of constructive social relationships but also of efficiency for capturing the degree of emergency. This paper provides an overview of my doctoral project that focuses on bimodal emotion recognition in an emergency call center with deep end-to-end learning techniques using the most advanced approaches such as transformer and zero-shot learning. In this work, we will first propose a supervised classification system for bimodal emotion recognition (paralinguistic and linguistic). Then, we will investigate an unsupervised system as a complement to the previous one in order to deal with “unseen” emotions and mixtures of real-life emotions. Our previous studies mainly explored the acoustic modality of speech emotion recognition (SER), we achieved close to the state-of-the-art results on the improvised part of the well-known database IEMOCAP and we applied our approach to a French emergency database CEMO collected in a previous project. In my thesis, new real recordings in an emergency call center will be collected. The main research topics of my thesis are: Emotional representation and annotation; Speech emotion recognition and ethical implications; Evaluation and real-life trials.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127616898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In-Corpo-Real Robot-Dreams: Empathy, Skin, and Boundaries
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666373
Dominika Lisy
This document is part of the submission for the doctoral consortium on affective computing and outlines the motivation, theoretical background, and research plan for my PhD project on empathy and social robots. The project can be divided into two parts. The first focuses on theoretical analyses of empathy through binary conceptualisations and on re-configuring empathic processes for human-robot interaction (HRI). I will draw on feminist philosophy and on empirical work studying signal processing from measurements on the skin and in machines in order to build a model of empathy as a process of crossing boundaries. In the second part, I plan to consider implementations of these theoretical ideas in the design of empathic robots. The first part aims to understand and dissolve conceptual boundaries, whereas the second re-establishes material and conceptual boundaries in order to contribute to ethical affective robot design.
{"title":"In-Corpo-Real Robot-Dreams: Empathy, Skin, and Boundaries","authors":"Dominika Lisy","doi":"10.1109/aciiw52867.2021.9666373","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666373","url":null,"abstract":"This document is part of the submission for the doctoral consortium on affective computing and outlines motivation, theoretical background, and my research plan for the PhD project on empathy and social robots. My project idea can be divided into two parts where the first is focusing on theoretical analyses of empathy through binary conceptualisations and re-configuring empathic processes for human-robot-interaction (HRI). I will be drawing from feminist philosophy and empirical work studying signal processing from measurements on the skin and in machines in order to build a model for empathy as a process of crossing boundaries. In the second part I plan to consider implementations of these theoretical ideas in the design of empathic robots. The first part is aiming to understand and dissolve conceptual boundaries whereas the second part is re-establishing material and conceptual boundaries in order to contribute to ethical affective robot design.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115411347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimodal Convolutional Neural Network Model for Protective Behavior Detection based on Body Movement Data
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666290
Kim Ngan Phan, Soohyung Kim, Hyung-Jeong Yang, Gueesang Lee
Chronic pain treatment is a significant challenge in the healthcare industry. Physiotherapists tailor physical activity to a patient based on the protective behavior the patient expresses through pain, and identify special equipment to help them perform the necessary tasks. Technology that can detect and assess pain behavior could support the delivery of personalized therapies and the long-term, self-directed management of the condition, improving engagement in valued everyday activities. In this paper, we present an approach to task 1 of the 2021 Affective Movement Recognition (AffectMove) challenge. Our deep learning approach detects whether persistent protective behavior is present or absent during exercise in a person with chronic pain, based on the full-body joint positions and back-muscle activity in the EmoPain challenge 2021 dataset. Our multimodal model employs convolutional neural networks built from stacked residual blocks. Moreover, we suggest new feature groups as additional inputs that increase protective behavior detection performance. The proposed approach achieves an F1 score of 78.56% on the validation set and 59.11% on the test set, outperforming previous baselines for detecting protective behavior on the EmoPain dataset.
{"title":"Multimodal Convolutional Neural Network Model for Protective Behavior Detection based on Body Movement Data","authors":"Kim Ngan Phan, Soohyung Kim, Hyung-Jeong Yang, Gueesang Lee","doi":"10.1109/aciiw52867.2021.9666290","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666290","url":null,"abstract":"Chronic pain treatment is a significant challenge in the healthcare industry. Physiotherapists tailor physical activity to a patient's activity based on their expression in protective behavior through pain recognition and find the special equipment to help them perform the necessary tasks. The technology can detect and assess pain behavior that could support the delivery of personalized therapies in the long-term and self-directed management of the condition to improve engagement in valued everyday activities. In this paper, we present an approach for task 1 of the Affective Movement Recognition (AffectMove) Challenge in 2021. Our proposed approach using deep learning helps detect persistent protective behavior present or absent during exercise in a person with chronic pain, based on the full-body joint position and back muscle activity of EmoPain challenge 2021 dataset. We employ convolutional neural networks by stacking residual blocks for the multimodal model. Moreover, we suggest new feature groups as additional inputs that help to increase performance for protective behavior. The proposed approach achieves an F1 score of 78.56% on validation set and 59.11% on test set. The proposed approach also outperforms previous baselines in detecting protective behavior from the EmoPain dataset.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"174 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114379675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The AffectMove 2021 Challenge - Affect Recognition from Naturalistic Movement Data
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666322
Temitayo A. Olugbade, R. Sagoleo, Simone Ghisio, Nicolas E. Gold, A. Williams, B. Gelder, A. Camurri, G. Volpe, N. Bianchi-Berthouze
We ran the first Affective Movement Recognition (AffectMove) challenge, which brings together datasets of affective bodily behaviour across different real-life applications to foster work in this area. Research on the automatic detection of naturalistic affective body expressions still lags behind detection based on other modalities, even though movement behaviour modelling is a highly relevant research problem for the affective computing community. The AffectMove challenge aimed to take advantage of existing body movement datasets to address key research problems in the automatic recognition of naturalistic and complex affective behaviour from this type of data. Participating teams competed to solve at least one of three tasks, based on datasets of different sensor types and real-life problems: the multimodal EmoPain dataset for the chronic pain physical rehabilitation context, the weDraw-1 Movement dataset for maths problem-solving settings, and the multimodal Unige-Maastricht Dance dataset. To foster work across datasets, we also challenged participants to take advantage of data across datasets to improve performance and to test the generalization of their approaches across different applications.
{"title":"The AffectMove 2021 Challenge - Affect Recognition from Naturalistic Movement Data","authors":"Temitayo A. Olugbade, R. Sagoleo, Simone Ghisio, Nicolas E. Gold, A. Williams, B. Gelder, A. Camurri, G. Volpe, N. Bianchi-Berthouze","doi":"10.1109/aciiw52867.2021.9666322","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666322","url":null,"abstract":"We ran the first Affective Movement Recognition (AffectMove) challenge that brings together datasets of affective bodily behaviour across different real-life applications to foster work in this area. Research on automatic detection of naturalistic affective body expressions is still lagging behind detection based on other modalities whereas movement behaviour modelling is a very interesting and very relevant research problem for the affective computing community. The AffectMove challenge aimed to take advantage of existing body movement datasets to address key research problems of automatic recognition of naturalistic and complex affective behaviour from this type of data. Participating teams competed to solve at least one of three tasks based on datasets of different sensors types and real-life problems: multimodal EmoPain dataset for chronic pain physical rehabilitation context, weDraw-l Movement dataset for maths problem solving settings, and multimodal Unige-Maastricht Dance dataset. To foster work across datasets, we also challenged participants to take advantage of the data across datasets to improve performances and also test the generalization of their approach across different applications.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124509645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Temporal based Emotion Recognition inspired by Activity Recognition models
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666356
Balaganesh Mohan, Mirela C. Popa
Affective computing is a subset of the larger field of human-computer interaction, with important connections to cognitive processes, influencing learning, decision-making, and perception. Of the multiple means of communication, facial expressions are one of the most widely accepted channels for emotion modulation, and they have received increased attention during the last few years. An important aspect contributing to their recognition success is modeling the temporal dimension. This paper therefore investigates the applicability of current state-of-the-art action recognition techniques to the human emotion recognition task. In particular, two architectures were investigated: a CNN-based model, the Temporal Shift Module (TSM), which can learn spatiotemporal features in 3D data at the computational complexity of a 2D CNN, and a video-based vision transformer employing spatio-temporal self-attention. The models were trained and tested on the CREMA-D dataset, demonstrating state-of-the-art performance with mean class accuracies of 82% and 77%, respectively, and outperforming the best previous approaches by at least 3.5%.
{"title":"Temporal based Emotion Recognition inspired by Activity Recognition models","authors":"Balaganesh Mohan, Mirela C. Popa","doi":"10.1109/aciiw52867.2021.9666356","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666356","url":null,"abstract":"Affective computing is a subset of the larger field of human-computer interaction, having important connections with cognitive processes, influencing the learning process, decision-making and perception. Out of the multiple means of communication, facial expressions are one of the most widely accepted channels for emotion modulation, receiving an increased attention during the last few years. An important aspect, contributing to their recognition success, concerns modeling the temporal dimension. Therefore, this paper aims to investigate the applicability of current state-of-the-art action recognition techniques to the human emotion recognition task. In particular, two different architectures were investigated, a CNN-based model, named Temporal Shift Module (TSM) that can learn spatiotemporal features in 3D data with the computational complexity of a 2D CNN and a video based vision transformer, employing spatio-temporal self attention. The models were trained and tested on the CREMA-D dataset, demonstrating state-of-the-art performance, with a mean class accuracy of 82% and 77% respectively, while outperforming best previous approaches by at least 3.5%.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127367991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Task-based Classification of Reflective Thinking Using Mixture of Classifiers
Saandeep Aathreya, Liza Jivnani, Shivam Srivastava, Saurabh Hinduja, Shaun J. Canavan
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666442
This paper studies the problem of reflective thinking in children during mathematics-related problem-solving activities. We present our approach to solving task 2 of the AffectMove challenge: Reflective Thinking Detection (RTD) while solving a mathematical activity. We utilize temporal data consisting of 3D joint positions to construct a series of classifiers that predict whether the subject exhibited reflective thinking during a given instance. We tackle the challenge of highly imbalanced data by incorporating and analyzing several meaningful data augmentation techniques and handcrafted features. We then feed the different features through a number of machine learning classifiers and select the best-performing model. We evaluate our predictions on multiple metrics, including accuracy, F1 score, and MCC, to work towards a generalized solution for this real-world dataset.
{"title":"Task-based Classification of Reflective Thinking Using Mixture of Classifiers","authors":"Saandeep Aathreya, Liza Jivnani, Shivam Srivastava, Saurabh Hinduja, Shaun J. Canavan","doi":"10.1109/aciiw52867.2021.9666442","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666442","url":null,"abstract":"This paper studies the problem of Reflective Thinking in children during mathematics related problem solving activities. We present our approach in solving task 2 of the AffectMove challenge, which is Reflective Thinking Detection (RTD) while solving a mathematical activity. We utilize temporal data consisting of 3D joint positions, to construct a series of classifiers that can predict whether the subject appeared to possess reflective thinking ability during the given instance. We tackle the challenge of highly imbalanced data by incorporating and analyzing several meaningful data augmentation techniques and handcrafted features. We then feed different features through a number of machine learning classifiers and select the best performing model. We evaluate our predictions on multiple metrics including accuracy, F1 score, and MCC to work towards a generalized solution for the real-world dataset.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129986446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Emotions as Latent Representations of Appraisals
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666198
Marios A. Fanourakis, Rayan Elalamy, G. Chanel
Emotion recognition is usually achieved by collecting features (physiological signals, events, facial expressions, etc.) to predict an emotional ground truth. This ground truth, however, is subjective and not always an accurate representation of the subject's emotional state. In this paper, we show that emotion can be learned in the latent space of machine learning methods without relying on an emotional ground truth. Our data consist of physiological measurements during video gameplay, game events, and subjective rankings of game events for the validation of our hypothesis. By calculating the Kendall τ rank correlation between the subjective game-event rankings and the rankings derived from both Canonical Correlation Analysis (CCA) and a simple neural network, we show that the latent space of these models is correlated with the subjective rankings even though those rankings were not part of the training data.
{"title":"Modeling Emotions as Latent Representations of Appraisals","authors":"Marios A. Fanourakis, Rayan Elalamy, G. Chanel","doi":"10.1109/aciiw52867.2021.9666198","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666198","url":null,"abstract":"Emotion recognition is usually achieved by collecting features (physiological signals, events, facial expressions, etc.) to predict an emotional ground truth. This ground truth, however, is subjective and not always an accurate representation of the emotional state of the subject. In this paper, we show that emotion can be learned in the latent space of machine learning methods without relying on an emotional ground truth. Our data consists of physiological measurements during video gameplay, game events, and subjective rankings of game events for the validation of our hypothesis. By calculating the Kendall ${tau}$ rank correlation between the subjective game event rankings and both the rankings derived from Canonical Correlation Analysis (CCA) and a simple neural network, we show that the latent space of these models is correlated with the subjective rankings even though they were not part of the training data.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125296864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulating Fear as Anticipation of Temporal Differences: An experimental investigation
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666259
L. Dai, J. Broekens
Humans use emotional expressions to communicate appraisals. Humans also use emotions to evaluate how they are doing relative to their current goals and desires. The Temporal Difference Reinforcement Learning (TDRL) Theory of Emotion proposes a structure for agents to simulate appropriate emotions during the learning process. In previous work, simulations have been shown to reproduce plausible emotion dynamics. In this paper we examine the plausibility and interpretability of TDRL-simulated fear when expressed by the agent. We presented different TDRL-based fear simulation methods to participants (n = 237) in an online study. Each method used a different action selection protocol for the agent's model-based anticipation process. Results suggest that an ε-greedy fear policy (ε = 0.1) combined with a long anticipation horizon provides a plausible fear estimate. This is, to our knowledge, the first experimental evidence detailing some of the predictions of the TDRL Theory of Emotion. Our results are of interest for the design of agent learning methods that are transparent to the user.
{"title":"Simulating Fear as Anticipation of Temporal Differences: An experimental investigation","authors":"L. Dai, J. Broekens","doi":"10.1109/aciiw52867.2021.9666259","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666259","url":null,"abstract":"Humans use emotional expressions to communicate appraisals. Humans also use emotions in evaluating how they are doing compared to their current goals and desires. The Temporal Difference Reinforcement Learning (TDRL) Theory of Emotion proposes a structure for agents to simulate appropriate emotions during the learning process. In previous work, simulations have shown to reproduce plausible emotion dynamics. In this paper we examine the plausibility and intepretability of TDRL-simulated fear, when expressed by the agent. We presented different TDRL-based fear simulation methods to participants ${left(n=237right)}$ in an online study. Each method used a different action selection protocol for the agent's model-based anticipation process. Results suggest that an ${in}$-greedy fear policy ${left(in=0.1right)}$ combined with a long anticipation horizon provides a plausible fear estimation. This is, to our knowledge, the first experimental evidence detailing some of the predictions of the TDRL Theory of Emotion. Our results are of interest to the design of agent learning methods that are transparent to the user.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123074635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}