Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666258
Samiha Samrose, E. Hoque
In this work, we analyze toxicity through audio-visual signals in a multimodal dataset of YouTube news-show videos featuring dyadic speakers in heated discussions. First, because different speakers may contribute differently to the toxicity, we propose a speaker-wise toxicity score that reveals each speaker's proportionate contribution. Because discussions with disagreements may exhibit signals of toxicity, we categorize discussions into binary high-low toxicity levels in order to identify those needing more attention. By analyzing visual features, we show that these levels correlate with facial expressions: Upper Lid Raiser (associated with ‘surprise’), Dimpler (associated with ‘contempt’), and Lip Corner Depressor (associated with ‘disgust’) remain statistically significant in separating high and low intensities of disrespect. Second, we investigate audio-based features such as pitch and intensity that can significantly elicit disrespect, and we use these signals to classify disrespect and non-disrespect samples with a logistic regression model, achieving 79.86% accuracy. Our findings shed light on the potential of audio-visual signals to add important context for understanding toxic discussions.
Title: Quantifying the Intensity of Toxicity for Discussions and Speakers
Venue: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
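The abstract above reports a logistic regression classifier over audio features such as pitch and intensity. As a rough illustration only (the paper's actual features, preprocessing, and training setup are not given here), a minimal from-scratch logistic regression on hypothetical z-scored pitch/intensity values might look like:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a binary logistic-regression model by plain batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability of class 1
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical samples: [mean pitch (z-scored), mean intensity (z-scored)];
# label 1 = disrespect, 0 = non-disrespect. Illustrative values, not the paper's data.
X = [[1.2, 0.9], [0.8, 1.1], [1.0, 0.7], [-0.9, -1.0], [-1.1, -0.6], [-0.7, -1.2]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
```

On this toy, linearly separable data the model recovers the labels; the 79.86% figure reported above comes from the authors' real dataset, not from a sketch like this.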
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666428
Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita
In this paper, we face the Affective Movement Recognition Challenge 2021, which is based on 3 naturalistic datasets of body movement, a fundamental component of everyday living, both in executing the actions that make up physical functioning and in the rich expression of affect, cognition, and intent. The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts, respectively. In particular, we rely on a single, simple yet effective approach that is competitive with state-of-the-art results in the literature on all 3 datasets. Our approach is based on a two-step procedure: first, we carefully handcraft features that fully and synthetically represent the raw data; then, we apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of them to deliver the predictions. As requested by the challenge, we report results in terms of three different metrics: accuracy, F1-score, and Matthews Correlation Coefficient.
Title: Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021
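The three challenge metrics named above (accuracy, F1-score, and Matthews Correlation Coefficient) can all be derived from the binary confusion matrix. A small self-contained sketch, independent of any particular model:

```python
import math

def confusion(y_true, y_pred):
    """Return (tp, tn, fp, fn) for binary labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return (tp + tn) / len(y_true)

def f1(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def mcc(y_true, y_pred):
    # Matthews Correlation Coefficient: uses all four confusion-matrix cells.
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative labels, not challenge data.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
```

Of the three, MCC is the most informative under class imbalance, since it accounts for all four confusion-matrix cells rather than only the positive class.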
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666270
Nao Takeuchi, Tomoko Koda
The paper introduces our system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture, using a Tobii eye tracker and cameras. The system compares the recognition results against models of exemplary interviewee nonverbal behaviors and highlights the behaviors that need improvement while playing back the interview recording. Our development goal was an inexpensive, easy-to-use system built from commercially available hardware, open-source code, and a CG agent that provides feedback to the interviewee. An initial evaluation of the system indicates that improvements are needed in both the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent.
Title: Job Interview Training System using Multimodal Behavior Analysis
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666201
Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
There has been a rise in automated technologies that screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model makes crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that predicts whether an interviewee should be hired. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time segments and features driving the model's decision making. Our analysis also shows that our models are significantly influenced by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpreting other video-based affective computing tasks, such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
Title: HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis
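The abstract describes an ensemble that combines a Multiple-Instance-Learning model and a Language-Modeling based model into a hire/no-hire prediction. The fusion rule is not specified above; a weighted average of the two component probabilities is one common, minimal choice (the function names, the 0.5 weight, and the 0.5 threshold here are all illustrative assumptions, not the paper's method):

```python
def ensemble_hire_score(mil_prob, lm_prob, weight=0.5):
    """Fuse two component models' hire probabilities by weighted averaging.

    mil_prob: probability from a Multiple-Instance-Learning style model.
    lm_prob:  probability from a Language-Modeling style model.
    """
    return weight * mil_prob + (1 - weight) * lm_prob

def decide(mil_prob, lm_prob, threshold=0.5):
    """Turn the fused score into a binary hire/no-hire decision."""
    return "hire" if ensemble_hire_score(mil_prob, lm_prob) >= threshold else "no-hire"
```

A weighted average keeps each component's contribution inspectable, which fits the interpretability goal stated above: one can report both component scores alongside the fused decision.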
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666252
A. Malhotra, J. Hoey
With the advancement of AI and robotics, computer systems have been put to many practical uses in domains such as healthcare, retail, and households. As AI agents become part of our day-to-day lives, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, as well as the identity of agents and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT) and its recent extension, BayesACT. We discuss how this theory can track the fine-grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about the concepts in the context, the identities at play, and the emotions felt, and steers the interaction toward maximizing emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of the agent from something machine-like to something that can establish and maintain a meaningful emotional connection.
Title: Emotions in Socio-cultural Interactive AI Agents
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666443
Celia Kessassi
During the last few years, a large number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress, and of psychosocial stress in virtual reality in particular, hinders our ability to finely control stress induction. In my PhD project, I plan to develop a computational model that describes the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors, and other situational factors.
Title: Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666398
Jungah Son
I present emoPaint, a painting application that allows users to create paintings expressive of human emotions through a range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes and allowing users to subsequently change the expressive properties of their paintings. The pre-made emotion brushes include art elements such as line textures, shape parameters, and color palettes, enabling users to control the expression of emotions in their paintings. I describe my implementation and illustrate paintings created using emoPaint.
Title: emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666288
Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana
What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers typically opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven diverse and have yet to converge on a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We conducted a study in which participants play a one-versus-one shooter game against two opponents exhibiting different behaviours and annotate each character's believability. In this novel process, annotations are made moment-to-moment using two different annotation schemes: BTrace and RankTrace. This is followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools, and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool could be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.
Title: Discrete versus Ordinal Time-Continuous Believability Assessment
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666303
A. Kotov, N. Arinkin, Alexander Filatov, L. Zaidelman, A. Zinina, Kirill Kivva
The F-2 companion robot is designed to implement and test various cognitive functions linked to text comprehension, as well as verbal and nonverbal communication strategies. F-2 has a syntactic parser and a text comprehension engine based on production rules, in which the meaning of each incoming sentence, or a computer-vision event, is associated with the most relevant scripts. The script engine simulates communicative reactions, emotional dynamics, and rational inferences. Scripts are activated depending on the state of the emotion model and produce output behavioral packages in Behavior Markup Language (BML), which are executed by the robot. We demonstrate the robot's simultaneous responses to incoming phrases, human gazes, and events in the Tangram puzzle game, where the robot guides the player and reacts emotionally to game events.
Title: Event Representation and Semantics Processing System for F-2 Companion Robot
Pub Date: 2021-09-28 | DOI: 10.1109/aciiw52867.2021.9666329
Motoaki Sato, K. Terada, J. Gratch
Emotion expressions show the results of appraising sensory inputs that reflect both the physical and social environment. The observer of an emotion expression must decode how the sensory input was appraised by the actor, i.e., perform reverse appraisal. However, reverse appraisal is an ill-posed inverse problem, because the same emotional expression can be produced in different situations, and emotion expressions in the same situation vary with individual differences. To overcome this difficulty, individuals must have an appropriate appraisal model. Our final goal is to build a social skill training system for people who have difficulty understanding the mental states of others. In the present paper, we present an emotional interactive agent with a transparent appraisal process. Whether social skills can be acquired through our system remains an open question for future work.
Title: Visualization of social emotional appraisal process of an agent