Robot mirroring: Improving well-being by fostering empathy with an artificial agent representing the self
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666320
David Antonio Gómez Jáuregui, Felix Dollack, Monica Perusquía-Hernández
Well-being has become a major societal goal. Being well means being physically and mentally healthy. Additionally, feeling empowered is also a component of well-being. Recently, self-tracking has been proposed as a means to achieve increased awareness, thus giving the opportunity to identify and decrease undesired behaviours. However, inappropriately communicated self-tracking results might cause the opposite effect. To address this, subtle self-tracking feedback that mirrors the self's state onto an embodied artificial agent has been proposed. By eliciting empathy towards the artificial agent and fostering helping behaviours, users would help themselves as well. We searched the literature for supporting or opposing evidence for the robot mirroring framework. The results showed increasing interest in self-tracking technologies for well-being management. Current discussions address what can be achieved with different levels of automation; the type and relevance of feedback; and the role that artificial agents, such as chatbots and robots, might play in supporting people's therapies. These findings support further development of the robot mirroring framework to improve medical, hedonic, and eudaemonic well-being.
{"title":"Robot mirroring: Improving well-being by fostering empathy with an artificial agent representing the self","authors":"David Antonio Gómez Jáuregui, Felix Dollack, Monica Perusquía-Hernández","doi":"10.1109/aciiw52867.2021.9666320","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666320","url":null,"abstract":"Well-being has become a major societal goal. Being well means being physically and mentally healthy. Additionally, feeling empowered is also a component of well-being. Recently, self-tracking has been proposed as means to achieve increased awareness, thus, giving the opportunity to identify and decrease undesired behaviours. However, inappropriately communicated self-tracking results might cause the opposite effect. To address this, a subtle self-tracking feedback by mirroring the self's state into an embodied artificial agent has been proposed. By eliciting empathy towards the artificial agent and fostering helping behaviours, users would help themselves as well. We searched the literature to find supporting or opposing evidence for the robot mirroring framework. The results showed an increasing interest in self-tracking technologies for well-being management. Current discussions disseminate what can be achieved with different levels of automation; the type and relevance of feedback; and the role that artificial agents, such as chatbots and robots, might play to support people's therapies. These findings support further development of the robot mirroring framework to improve medical, hedonic, and eudaemonic well-being.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123823659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Job Interview Training System using Multimodal Behavior Analysis
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666270
Nao Takeuchi, Tomoko Koda
This paper introduces our system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture, using a Tobii eye tracker and cameras. The system compares the recognition results with models of exemplary interviewee nonverbal behavior and highlights the behaviors that need improvement while playing back the interview recording. The development goal for our system was to construct an inexpensive and easy-to-use system using commercially available hardware, open-source code, and a CG agent that provides feedback to the interviewee. The results of the initial evaluation indicate that improvements in the recognition accuracy of nonverbal behaviors and in the quality of the interaction with the CG agent are needed.
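A minimal sketch of the comparison step described in the abstract, assuming per-behavior scores have already been extracted from the eye tracker and cameras; the feature names and exemplar ranges below are illustrative assumptions, not the actual system's models.

```python
# Hypothetical sketch: flag nonverbal behaviours that fall outside
# exemplar ranges so they can be highlighted during interview playback.
# Feature names and thresholds are illustrative assumptions.

EXEMPLAR_RANGES = {
    "gaze_contact_ratio": (0.6, 0.9),    # share of time looking at the interviewer
    "smile_ratio": (0.2, 0.5),           # share of frames with a smile
    "upright_posture_ratio": (0.8, 1.0)  # share of frames with upright posture
}

def behaviours_to_improve(scores: dict) -> dict:
    """Return behaviours whose score lies outside the exemplar range."""
    feedback = {}
    for name, (low, high) in EXEMPLAR_RANGES.items():
        value = scores.get(name)
        if value is None:
            continue
        if value < low:
            feedback[name] = f"{value:.2f} is below the exemplar minimum {low:.2f}"
        elif value > high:
            feedback[name] = f"{value:.2f} is above the exemplar maximum {high:.2f}"
    return feedback

if __name__ == "__main__":
    print(behaviours_to_improve(
        {"gaze_contact_ratio": 0.45, "smile_ratio": 0.3, "upright_posture_ratio": 0.95}
    ))
```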
{"title":"Job Interview Training System using Multimodal Behavior Analysis","authors":"Nao Takeuchi, Tomoko Koda","doi":"10.1109/aciiw52867.2021.9666270","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666270","url":null,"abstract":"The paper introduces our system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture using a Tobii eye tracker and cameras. The system compares the recognition results with those of models of exemplary nonverbal behaviors of an interviewee and highlights the behaviors that need improvement while playing back the interview recording. The development goal for our system was to construct an inexpensive and easy-to-use system using commercially available HWs, open-source code, and a CG agent that would provide feedback to the interviewee. The results of the initial evaluation of the system indicate that improvements in the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent are needed.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127381446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666428
Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita
In this paper, we address the Affective Movement Recognition Challenge 2021, which is based on three naturalistic datasets of body movement, a fundamental component of everyday living both in the execution of the actions that make up physical functioning and in the rich expression of affect, cognition, and intent. The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts, respectively. We rely on a single, simple yet effective approach that is competitive with state-of-the-art results in the literature on all three datasets. Our approach is a two-step procedure: first, we carefully handcraft features that fully and compactly represent the raw data; then we apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of them to deliver the predictions. As requested by the challenge, we report results in terms of three metrics: accuracy, F1-score, and Matthews Correlation Coefficient.
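The two-step procedure can be sketched as follows; the feature choices, data shapes, and hyperparameter grid are placeholder assumptions rather than the authors' exact configuration, and an XGBoost classifier could be swapped in for the Random Forest in the same way.

```python
# Sketch of the two-step procedure: handcrafted statistics per movement
# sequence, then a cross-validation-tuned Random Forest, evaluated with the
# three challenge metrics. Shapes and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def handcrafted_features(sequence: np.ndarray) -> np.ndarray:
    """Summary statistics over a (timesteps, channels) movement sequence."""
    return np.concatenate([
        sequence.mean(axis=0),
        sequence.std(axis=0),
        sequence.min(axis=0),
        sequence.max(axis=0),
        np.abs(np.diff(sequence, axis=0)).mean(axis=0),  # mean absolute frame-to-frame change
    ])

# Toy data standing in for one of the challenge datasets.
rng = np.random.default_rng(0)
X = np.stack([handcrafted_features(rng.normal(size=(180, 6))) for _ in range(200)])
y = rng.integers(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Rigorous tuning is approximated here by a small cross-validated grid search.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
)
grid.fit(X_tr, y_tr)
pred = grid.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("MCC:", matthews_corrcoef(y_te, pred))
```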
{"title":"Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021","authors":"Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita","doi":"10.1109/aciiw52867.2021.9666428","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666428","url":null,"abstract":"In this paper, we face the Affective Movement Recognition Challenge 2021 which is based on 3 naturalistic datasets on body movement, which is a fundamental component of everyday living both in the execution of the actions that make up physical functioning as well as in rich expression of affect, cognition, and intent. The datasets were built on deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts respectively. In particular, we will rely on a single, simple yet effective, approach able to be competitive with state-of-the-art results in the literature on all of the 3 datasets. Our approach is based on a two step procedure: first we will carefully handcraft features able to fully and synthetically represent the raw data and then we will apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of it to deliver the predictions. As requested by the challenge, we will report results in terms of three different metrics: accuracy, F1-score, and Matthew Correlation Coefficient.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124053699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666201
Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
There has been a rise in automated technologies to screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model makes crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that can predict whether an interviewee should be hired. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time segments and features driving the model's decision making. Our analysis also shows that our models are significantly impacted by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpret other video-based affective computing tasks such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
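A minimal sketch of the two ideas above: combining the two models' hire probabilities, and a model-agnostic occlusion check that masks one time segment at a time to see how much the prediction changes. The model interfaces, segment representation, and dummy predictor are illustrative assumptions, not HirePreter's actual implementation.

```python
# Hypothetical sketch: (1) ensemble of a Multiple-Instance-Learning style model
# and a language-model-based classifier by averaging hire probabilities, and
# (2) occlusion-based importance of individual interview time segments.
import numpy as np

def ensemble_probability(p_mil: float, p_lm: float, w: float = 0.5) -> float:
    """Weighted average of the two models' hire probabilities."""
    return w * p_mil + (1.0 - w) * p_lm

def segment_importance(predict_fn, segments: list) -> np.ndarray:
    """Drop in prediction when each segment is masked; larger = more informative."""
    baseline = predict_fn(segments)
    scores = []
    for i in range(len(segments)):
        masked = segments[:i] + [None] + segments[i + 1:]  # None marks a masked segment
        scores.append(baseline - predict_fn(masked))
    return np.array(scores)

if __name__ == "__main__":
    # Dummy predictor: prediction is the mean of the unmasked segments' scores.
    def dummy_predict(segs):
        vals = [s for s in segs if s is not None]
        return float(np.mean(vals)) if vals else 0.0

    segs = [0.2, 0.9, 0.4, 0.8]  # e.g. per-segment affect scores across the interview
    print("ensembled hire probability:", ensemble_probability(0.7, 0.6))
    print("segment importance:", segment_importance(dummy_predict, segs))
```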
{"title":"HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis","authors":"Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque","doi":"10.1109/aciiw52867.2021.9666201","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666201","url":null,"abstract":"There has been a rise in automated technologies to screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information of how the machine learning model is making crucial decisions that impacts the livelihood of thousands of people. We built an ensemble model – by combining Multiple-Instance-Learning and Language-Modeling based models – that can predict whether an interviewee should be hired or not. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time-segments and features driving the model's decision making. Our analysis also shows that our models are significantly impacted by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpret other video-based affective computing tasks like analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128100312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotions in Socio-cultural Interactive AI Agents
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666252
A. Malhotra, J. Hoey
With the advancement of AI and robotics, computer systems have been put to many practical uses in a variety of domains such as healthcare, retail, and households. As AI agents become part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, as well as the identity of agents and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT) and its recent extension, BayesACT. We discuss how this theory can track fine-grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about concepts for the context, the identities at play, and the emotions felt, and models a successful interaction as one that maximizes emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of the agent from something machine-like to something that can establish and maintain a meaningful emotional connection.
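ACT makes "emotional coherence" numeric: concepts carry fundamental sentiments on Evaluation-Potency-Activity (EPA) dimensions, events produce transient impressions, and the deflection between the two is what actors strive to keep low. A minimal sketch of an unweighted deflection computation, with made-up EPA values (the identity labels and numbers are illustrative, not published dictionary ratings):

```python
# Minimal sketch of Affect Control Theory's deflection: the sum of squared
# differences between fundamental sentiments (culturally shared EPA ratings
# of actor, behaviour, and object) and the transient impressions created by
# an event. EPA values below are made up for illustration.
import numpy as np

def deflection(fundamentals: np.ndarray, transients: np.ndarray) -> float:
    """Unweighted deflection: sum of squared differences across EPA dimensions."""
    return float(np.sum((fundamentals - transients) ** 2))

# Actor, behaviour, object, each rated on (Evaluation, Potency, Activity).
fundamentals = np.array([
    [1.5, 1.2, 0.8],   # e.g. "assistant"
    [2.0, 1.0, 1.1],   # e.g. "helps"
    [1.8, 0.5, 0.9],   # e.g. "customer"
])
transients = np.array([
    [1.2, 1.0, 0.7],
    [1.1, 0.8, 1.0],
    [1.6, 0.4, 0.8],
])

print("deflection:", deflection(fundamentals, transients))
```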
{"title":"Emotions in Socio-cultural Interactive AI Agents","authors":"A. Malhotra, J. Hoey","doi":"10.1109/aciiw52867.2021.9666252","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666252","url":null,"abstract":"With the advancement of AI and Robotics, computer systems have been put to many practical uses in a variety of domains like healthcare, retail, households, and more. As AI agents become a part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, and the identity of agents and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT), and its recent extension, BayesACT. We discuss how this theory can track fine grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about concepts for the context, the identities at play, and the emotions felt, and aims towards a successful interaction with the aim of maximizing emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of an agent from something that is machine-like, to something that can establish and maintain a meaningful emotional connection.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133827809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666398
Jungah Son
I present emoPaint, a painting application that allows users to create paintings expressive of human emotions with a range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes to users and allowing them to subsequently change the expressive properties of their paintings. Pre-made emotion brushes include art elements such as line textures, shape parameters, and color palettes. This enables users to control the expression of emotions in their paintings. I describe my implementation and illustrate paintings created using emoPaint.
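One way to picture a pre-made emotion brush is as a bundle of art elements keyed by an emotion label, which the user can then override. The field names and values below are assumptions for illustration, not emoPaint's actual data model.

```python
# Illustrative sketch of a pre-made "emotion brush": line texture, shape
# parameters, and a colour palette grouped per emotion, with user overrides.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class EmotionBrush:
    emotion: str
    line_texture: str      # e.g. "jagged", "smooth"
    stroke_width: float    # shape parameter
    jitter: float          # shape parameter: randomness of the stroke path
    palette: tuple         # RGB colours

BRUSHES = {
    "anger": EmotionBrush("anger", "jagged", 8.0, 0.6, ((200, 30, 30), (120, 10, 10))),
    "calm":  EmotionBrush("calm", "smooth", 3.0, 0.1, ((90, 140, 200), (200, 220, 240))),
}

# A user picks a pre-made brush, then tweaks its expressive properties.
brush = replace(BRUSHES["anger"], stroke_width=5.0, jitter=0.4)
print(brush)
```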
{"title":"emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool","authors":"Jungah Son","doi":"10.1109/aciiw52867.2021.9666398","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666398","url":null,"abstract":"I present emoPaint, a painting application that allows users to create paintings expressive of human emotions with the range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes to users and allowing them to subsequently change the expressive properties of their paintings. Pre-made emotion brushes include art elements such as line textures, shape parameters and color palettes. This enables users to control expression of emotions in their paintings. I describe my implementation and illustrate paintings created using emoPaint.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"80 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discrete versus Ordinal Time-Continuous Believability Assessment
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666288
Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana
What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven diverse and thus have yet to converge on a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability. They face two different opponents that present different behaviours. In this novel process, the annotations are done moment-to-moment using two different annotation schemes: BTrace and RankTrace. This is followed by the participant's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools, and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool could be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.
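As a rough illustration of how moment-to-moment traces can be lined up against a final discrete preference, the sketch below summarises a binary (BTrace-style) trace and an unbounded continuous (RankTrace-style) trace per playthrough. The summary statistics are assumptions for illustration, not the study's actual analysis or the tools' APIs.

```python
# Illustrative per-playthrough summaries of two annotation traces, so they can
# be compared with the participant's discrete preference between playthroughs.
import numpy as np

def summarise_binary(trace) -> float:
    """Fraction of time the character was marked believable."""
    return float(np.mean(trace))

def summarise_continuous(trace) -> float:
    """Mean of the trace after rank-preserving min-max normalisation."""
    trace = np.asarray(trace, dtype=float)
    span = trace.max() - trace.min()
    return float(np.mean((trace - trace.min()) / span)) if span > 0 else 0.5

playthrough_a = {"btrace": [1, 1, 0, 1, 1], "ranktrace": [0.0, 0.4, 0.3, 0.9, 1.2]}
playthrough_b = {"btrace": [0, 0, 1, 0, 1], "ranktrace": [0.0, -0.2, 0.1, 0.0, 0.3]}

for name, p in (("A", playthrough_a), ("B", playthrough_b)):
    print(name, summarise_binary(p["btrace"]), summarise_continuous(p["ranktrace"]))
```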
{"title":"Discrete versus Ordinal Time-Continuous Believability Assessment","authors":"Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana","doi":"10.1109/aciiw52867.2021.9666288","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666288","url":null,"abstract":"What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven to be diverse and, thus, have yet to establish a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We have conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability. They face two different opponents which present different behaviours. In this novel process, these annotations are done moment-to-moment using two different annotation schemes: BTrace and RankTrace. This is followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool could be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113997496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666443
Celia Kessassi
During the last few years, a large number of virtual reality applications dealing with psychosocial stress have emerged. However, our limited understanding of stress, and of psychosocial stress in virtual reality in particular, hinders our ability to finely control stress induction. In my PhD project, I plan to develop a computational model describing the respective impact of each factor that induces psychosocial stress, including virtual reality factors, personal factors, and other situational factors.
{"title":"Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations","authors":"Celia Kessassi","doi":"10.1109/aciiw52867.2021.9666443","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666443","url":null,"abstract":"During the last few years, a wide number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress and psychosocial stress in virtual reality hinders our ability to finely control stress induction. In my PhD project I plan to develop a computational model which will describe the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors and other situational factors.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123483823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Deep Learning Approaches for Protective Behaviour Detection Under Class Imbalance from MoCap and EMG data
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666417
Karim Radouane, Andon Tchechmedjiev, Binbin Xu, S. Harispe
The AffectMove challenge, organised in the context of the H2020 EnTimeMent project, offers three movement-classification tasks in realistic settings and use cases. Our team, from the EuroMov DHM laboratory, participated in Task 1: protective behaviour (against pain) detection from motion capture data and EMG in patients suffering from pain-inducing musculoskeletal disorders. We implemented two simple baseline systems, an LSTM system with pre-training (NTU-60) and a Transformer. We also adapted PA-ResGCN, a Graph Convolutional Network for skeleton-based action classification with state-of-the-art (SOTA) performance, to protective behaviour detection, augmenting it with strategies to handle class imbalance. For PA-ResGCN-N51 we explored naïve fusion strategies with an EMG-only convolutional neural network, which did not improve the overall performance. Unsurprisingly, the best performing system was PA-ResGCN-N51 (without EMG), with an F1 score of 53.36% on the test set for the minority class (MCC 0.4247). The Transformer baseline (MoCap + EMG) came second at 41.05% test F1 (MCC 0.3523), and the LSTM baseline third at 31.16% F1 (MCC 0.1763). On the validation set the LSTM showed performance comparable to PA-ResGCN; we hypothesize that the LSTM over-fitted on a validation set that was not very representative of the train/test distribution.
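One common class-imbalance strategy of the kind mentioned above is a class-weighted loss. The sketch below shows inverse-frequency weighting of the cross-entropy for a small LSTM over windows of concatenated MoCap and EMG channels; the channel count, window length, architecture, and hyperparameters are placeholder assumptions, not the submitted systems.

```python
# Sketch of class-weighted cross-entropy (inverse class frequency) for an LSTM
# classifier over fixed-length windows of MoCap + EMG channels.
import torch
import torch.nn as nn

class ProtectiveBehaviourLSTM(nn.Module):
    def __init__(self, n_channels=78, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits: (batch, n_classes)

# Toy batch: 32 windows of 180 frames; roughly 10% minority (protective) class.
x = torch.randn(32, 180, 78)
y = (torch.rand(32) < 0.1).long()

# Inverse-frequency class weights counter the imbalance.
counts = torch.bincount(y, minlength=2).float()
weights = counts.sum() / (2 * counts.clamp(min=1))
criterion = nn.CrossEntropyLoss(weight=weights)

model = ProtectiveBehaviourLSTM()
loss = criterion(model(x), y)
loss.backward()
print("weighted loss:", loss.item())
```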
{"title":"Comparison of Deep Learning Approaches for Protective Behaviour Detection Under Class Imbalance from MoCap and EMG data","authors":"Karim Radouane, Andon Tchechmedjiev, Binbin Xu, S. Harispe","doi":"10.1109/aciiw52867.2021.9666417","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666417","url":null,"abstract":"The AffecMove challenge organised in the context of the H2020 EnTimeMent project offers three tasks of movement classification in realistic settings and use-cases. Our team, from the EuroMov DHM laboratory participated in Task 1, for protective behaviour (against pain) detection from motion capture data and EMG, in patients suffering from pain-inducing muskuloskeletal disorders. We implemented two simple baseline systems, one LSTM system with pre-training (NTU-60) and a Transformer. We also adapted PA-ResGCN a Graph Convolutional Network for skeleton-based action classification showing state-of-the-art (SOTA) performance to protective behaviour detection, augmented with strategies to handle class-imbalance. For PA-ResGCN-N51 we explored naïve fusion strategies with an EMG-only convolutional neural network that didn't improve the overall performance. Unsurprisingly, the best performing system was PA-ResGCN-N51 (w/o EMG) with a F1 score of 53.36% on the test set for the minority class (MCC 0.4247). The Transformer baseline (MoCap + EMG) came second at 41.05% F1 test performance (MCC 0.3523) and the LSTM baseline third at 31.16% F1 (MCC 0.1763). On the validation set the LSTM showed performance comparable to PA-ResGCN, we hypothesize that the LSTM over-fitted on the validation set that wasn't very representative of the train/test distribution.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124090928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementing Parallel and Independent Movements for a Social Robot's Affective Expressions
Pub Date: 2021-09-28. DOI: 10.1109/aciiw52867.2021.9666341
Hannes Ritschel, Thomas Kiderle, E. André
The design and playback of natural and believable movements is a challenge for social robots. They have several limitations due to their physical embodiment, and sometimes also with regard to their software. Taking the expression of happiness as an example, we present an approach for implementing parallel and independent movements on a social robot that does not have a full-fledged animation API. The technique can create more complex movement sequences than the typical sequential playback of poses and utterances, and is thus better suited for expressing affect and nonverbal behaviors.
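A minimal sketch of the idea of overlapping independent movement channels rather than playing them back one after another; the robot commands (move_joint, say) are hypothetical placeholders, not the robot's or the paper's API.

```python
# Illustrative sketch: run each actuator group and the utterance in its own
# thread so a head tilt, arm raise, and speech can overlap when expressing
# happiness, instead of sequential playback.
import threading
import time

def move_joint(joint: str, angle: float, duration: float):
    """Placeholder for a blocking joint-motion command."""
    time.sleep(duration)
    print(f"{joint} -> {angle} deg after {duration}s")

def say(text: str, duration: float):
    """Placeholder for a blocking text-to-speech command."""
    time.sleep(duration)
    print(f"said: {text}")

actions = [
    threading.Thread(target=move_joint, args=("head_pitch", -15.0, 1.0)),
    threading.Thread(target=move_joint, args=("right_arm", 60.0, 1.5)),
    threading.Thread(target=say, args=("That is wonderful!", 1.2)),
]
for t in actions:
    t.start()
for t in actions:
    t.join()
```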
{"title":"Implementing Parallel and Independent Movements for a Social Robot's Affective Expressions","authors":"Hannes Ritschel, Thomas Kiderle, E. André","doi":"10.1109/aciiw52867.2021.9666341","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666341","url":null,"abstract":"The design and playback of natural and believable movements is a challenge for social robots. They have several limitations due to their physical embodiment, and sometimes also with regard to their software. Taking the example of the expression of happiness, we present an approach for implementing parallel and independent movements for a social robot, which does not have a full-fledged animation API. The technique is able to create more complex movement sequences than a typical sequential playback of poses and utterances and thus is better suited for expression of affect and nonverbal behaviors.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128591001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}