
Latest publications: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)

Robot mirroring: Improving well-being by fostering empathy with an artificial agent representing the self
David Antonio Gómez Jáuregui, Felix Dollack, Monica Perusquía-Hernández
Well-being has become a major societal goal. Being well means being physically and mentally healthy. Additionally, feeling empowered is also a component of well-being. Recently, self-tracking has been proposed as a means to increase awareness and thus create the opportunity to identify and decrease undesired behaviours. However, inappropriately communicated self-tracking results might cause the opposite effect. To address this, subtle self-tracking feedback that mirrors the self's state onto an embodied artificial agent has been proposed. By eliciting empathy towards the artificial agent and fostering helping behaviours, users would help themselves as well. We searched the literature for evidence supporting or opposing the robot mirroring framework. The results showed increasing interest in self-tracking technologies for well-being management. Current discussions address what can be achieved with different levels of automation; the type and relevance of feedback; and the role that artificial agents, such as chatbots and robots, might play in supporting people's therapies. These findings support further development of the robot mirroring framework to improve medical, hedonic, and eudaemonic well-being.
{"title":"Robot mirroring: Improving well-being by fostering empathy with an artificial agent representing the self","authors":"David Antonio Gómez Jáuregui, Felix Dollack, Monica Perusquía-Hernández","doi":"10.1109/aciiw52867.2021.9666320","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666320","url":null,"abstract":"Well-being has become a major societal goal. Being well means being physically and mentally healthy. Additionally, feeling empowered is also a component of well-being. Recently, self-tracking has been proposed as means to achieve increased awareness, thus, giving the opportunity to identify and decrease undesired behaviours. However, inappropriately communicated self-tracking results might cause the opposite effect. To address this, a subtle self-tracking feedback by mirroring the self's state into an embodied artificial agent has been proposed. By eliciting empathy towards the artificial agent and fostering helping behaviours, users would help themselves as well. We searched the literature to find supporting or opposing evidence for the robot mirroring framework. The results showed an increasing interest in self-tracking technologies for well-being management. Current discussions disseminate what can be achieved with different levels of automation; the type and relevance of feedback; and the role that artificial agents, such as chatbots and robots, might play to support people's therapies. These findings support further development of the robot mirroring framework to improve medical, hedonic, and eudaemonic well-being.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123823659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Job Interview Training System using Multimodal Behavior Analysis
Nao Takeuchi, Tomoko Koda
The paper introduces our system, which recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture, using a Tobii eye tracker and cameras. The system compares the recognition results with models of exemplary interviewee nonverbal behavior and highlights the behaviors that need improvement while playing back the interview recording. The development goal for our system was to construct an inexpensive and easy-to-use system from commercially available hardware, open-source code, and a CG agent that provides feedback to the interviewee. The results of the initial evaluation indicate that both the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent need improvement.
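The feedback step described here, comparing recognized behaviours against exemplary models and highlighting the ones needing improvement, reduces to a range check per behaviour. A minimal sketch of that idea; the behaviour names and target ranges below are invented for illustration, not the authors' exemplar models.

```python
# Hedged sketch: flag behaviours falling outside exemplar target ranges.
# Behaviour names and ranges are invented, not the paper's actual models.
EXEMPLAR_RANGES = {
    "gaze_contact_ratio": (0.6, 0.9),  # share of time looking at the interviewer
    "smile_ratio":        (0.2, 0.5),
    "posture_sway_cm":    (0.0, 3.0),
}

def behaviours_to_improve(measured):
    """Return human-readable flags for behaviours outside their target range."""
    flags = []
    for name, (lo, hi) in EXEMPLAR_RANGES.items():
        value = measured[name]
        if not lo <= value <= hi:
            flags.append(f"{name}: {value:.2f} (target {lo}-{hi})")
    return flags

print(behaviours_to_improve(
    {"gaze_contact_ratio": 0.45, "smile_ratio": 0.30, "posture_sway_cm": 4.2}))
```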
{"title":"Job Interview Training System using Multimodal Behavior Analysis","authors":"Nao Takeuchi, Tomoko Koda","doi":"10.1109/aciiw52867.2021.9666270","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666270","url":null,"abstract":"The paper introduces our system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture using a Tobii eye tracker and cameras. The system compares the recognition results with those of models of exemplary nonverbal behaviors of an interviewee and highlights the behaviors that need improvement while playing back the interview recording. The development goal for our system was to construct an inexpensive and easy-to-use system using commercially available HWs, open-source code, and a CG agent that would provide feedback to the interviewee. The results of the initial evaluation of the system indicate that improvements in the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent are needed.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127381446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021
Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita
In this paper, we face the Affective Movement Recognition Challenge 2021, which is based on three naturalistic datasets of body movement, a fundamental component of everyday living both in the execution of the actions that make up physical functioning and in the rich expression of affect, cognition, and intent. The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts, respectively. We rely on a single, simple yet effective approach that is competitive with state-of-the-art results in the literature on all three datasets. Our approach is a two-step procedure: first we carefully handcraft features able to fully and synthetically represent the raw data, and then we apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of them to deliver the predictions. As requested by the challenge, we report results in terms of three metrics: accuracy, F1-score, and Matthews Correlation Coefficient.
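The two-step procedure (handcrafted features, then tree ensembles tuned on the challenge metrics) maps onto a short scikit-learn pipeline. A minimal sketch under the assumption that features are already extracted into X with binary labels y; the placeholder data and search grid are illustrative, not the authors' configuration, and XGBoost would slot into the same search in place of the Random Forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 32))        # placeholder handcrafted features
y = rng.integers(0, 2, size=400)      # placeholder binary labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Tune the ensemble on one of the metrics the challenge reports (here, MCC).
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="matthews_corrcoef",
    cv=5,
)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)

print("accuracy:", accuracy_score(y_te, pred))
print("F1-score:", f1_score(y_te, pred))
print("MCC:     ", matthews_corrcoef(y_te, pred))
```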
{"title":"Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021","authors":"Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita","doi":"10.1109/aciiw52867.2021.9666428","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666428","url":null,"abstract":"In this paper, we face the Affective Movement Recognition Challenge 2021 which is based on 3 naturalistic datasets on body movement, which is a fundamental component of everyday living both in the execution of the actions that make up physical functioning as well as in rich expression of affect, cognition, and intent. The datasets were built on deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts respectively. In particular, we will rely on a single, simple yet effective, approach able to be competitive with state-of-the-art results in the literature on all of the 3 datasets. Our approach is based on a two step procedure: first we will carefully handcraft features able to fully and synthetically represent the raw data and then we will apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of it to deliver the predictions. As requested by the challenge, we will report results in terms of three different metrics: accuracy, F1-score, and Matthew Correlation Coefficient.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124053699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis
Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
There has been a rise in automated technologies that screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model makes crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that can predict whether an interviewee should be hired. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time segments and features driving the model's decisions. Our analysis also shows that our models are significantly influenced by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to other video-based affective computing tasks, such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
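A Multiple-Instance-Learning model of this kind typically treats an interview as a bag of time-segment instances, which is also what makes segment-level interpretation natural: the pooling weights indicate which segments drove the bag-level decision. A minimal attention-pooling MIL sketch in PyTorch; the layer sizes and feature dimensions are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Bag-level classifier over a variable number of time-segment instances.
    The attention weights double as per-segment importance scores."""
    def __init__(self, in_dim=128, hid_dim=64):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Tanh(),
                                  nn.Linear(hid_dim, 1))
        self.clf = nn.Linear(in_dim, 1)

    def forward(self, instances):                        # (n_segments, in_dim)
        a = torch.softmax(self.attn(instances), dim=0)   # (n_segments, 1)
        bag = (a * instances).sum(dim=0)                 # weighted bag embedding
        return torch.sigmoid(self.clf(bag)), a.squeeze(-1)

model = AttentionMIL()
segments = torch.randn(20, 128)       # mock features for 20 interview segments
prob, weights = model(segments)
print(f"hire probability: {prob.item():.2f}")
print("most informative segment:", int(weights.argmax()))
```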
{"title":"HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis","authors":"Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque","doi":"10.1109/aciiw52867.2021.9666201","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666201","url":null,"abstract":"There has been a rise in automated technologies to screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information of how the machine learning model is making crucial decisions that impacts the livelihood of thousands of people. We built an ensemble model – by combining Multiple-Instance-Learning and Language-Modeling based models – that can predict whether an interviewee should be hired or not. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time-segments and features driving the model's decision making. Our analysis also shows that our models are significantly impacted by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpret other video-based affective computing tasks like analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128100312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Emotions in Socio-cultural Interactive AI Agents
A. Malhotra, J. Hoey
With the advancement of AI and robotics, computer systems have been put to many practical uses in domains such as healthcare, retail, and households. As AI agents become part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, as well as the agent's own identity and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT) and its recent extension, BayesACT. We discuss how this theory can track the fine-grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) attached to the concepts in a context, the identities at play, and the emotions felt, and treats an interaction as successful when it maximizes emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of the agent from something machine-like to something that can establish and maintain a meaningful emotional connection.
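In ACT, concepts carry sentiments in a three-dimensional Evaluation-Potency-Activity (EPA) space, and emotional coherence corresponds to low deflection: the squared distance between culturally shared fundamental sentiments and the transient impressions an event creates. A minimal sketch of that core quantity; the EPA values are invented for illustration.

```python
import numpy as np

# Deflection in Affect Control Theory: squared Euclidean distance between
# fundamental sentiments and transient impressions in EPA space.
# The EPA values below are invented for illustration.
fundamental = np.array([2.1, 1.4, 0.8])  # e.g. sentiment for a "helpful robot"
transient   = np.array([0.5, 1.0, 1.9])  # impression after an awkward event

deflection = float(np.sum((fundamental - transient) ** 2))
print(f"deflection: {deflection:.2f}")   # low deflection = coherent interaction
```

An agent guided by ACT would prefer the action whose predicted transient impressions minimize this deflection.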
{"title":"Emotions in Socio-cultural Interactive AI Agents","authors":"A. Malhotra, J. Hoey","doi":"10.1109/aciiw52867.2021.9666252","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666252","url":null,"abstract":"With the advancement of AI and Robotics, computer systems have been put to many practical uses in a variety of domains like healthcare, retail, households, and more. As AI agents become a part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, and the identity of agents and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT), and its recent extension, BayesACT. We discuss how this theory can track fine grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about concepts for the context, the identities at play, and the emotions felt, and aims towards a successful interaction with the aim of maximizing emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of an agent from something that is machine-like, to something that can establish and maintain a meaningful emotional connection.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133827809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool
Jungah Son
I present emoPaint, a painting application that allows users to create paintings expressive of human emotions using a range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes and allowing users to subsequently change the expressive properties of their paintings. The pre-made emotion brushes bundle art elements such as line textures, shape parameters, and color palettes, enabling users to control the expression of emotion in their paintings. I describe my implementation and illustrate paintings created with emoPaint.
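A pre-made emotion brush of the kind described is essentially a bundle of art-element parameters. A minimal sketch of such a data structure; the concrete emotions and parameter values are invented, not emoPaint's actual presets.

```python
from dataclasses import dataclass

@dataclass
class EmotionBrush:
    """One pre-made brush bundling the art elements named above.
    All concrete values here are invented for illustration."""
    emotion: str
    line_texture: str       # e.g. "jagged" vs "smooth"
    stroke_width: float
    jitter: float           # shape parameter: 0 = calm, 1 = agitated
    palette: tuple          # RGB colour palette

BRUSHES = {
    "anger": EmotionBrush("anger", "jagged", 0.8, 0.9, ((200, 30, 30), (90, 0, 0))),
    "calm":  EmotionBrush("calm", "smooth", 0.3, 0.1, ((80, 140, 200), (200, 220, 240))),
}
print(BRUSHES["anger"])
```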
{"title":"emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool","authors":"Jungah Son","doi":"10.1109/aciiw52867.2021.9666398","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666398","url":null,"abstract":"I present emoPaint, a painting application that allows users to create paintings expressive of human emotions with the range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes to users and allowing them to subsequently change the expressive properties of their paintings. Pre-made emotion brushes include art elements such as line textures, shape parameters and color palettes. This enables users to control expression of emotions in their paintings. I describe my implementation and illustrate paintings created using emoPaint.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"80 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113933303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discrete versus Ordinal Time-Continuous Believability Assessment
Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana
What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven diverse and have yet to coalesce into a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability while facing two opponents that exhibit different behaviours. In this novel process, the annotations are made moment-to-moment using two different annotation schemes, BTrace and RankTrace, followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool can be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.
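Time-continuous traces are commonly made comparable to discrete judgements by reducing them to ordinal changes per time window. A minimal sketch of that reduction on a synthetic trace; the window length and dead-band threshold are arbitrary choices here, not the paper's processing.

```python
import numpy as np

def trace_to_ordinal(trace, window=50, eps=0.02):
    """Reduce a continuous annotation trace to per-window ordinal labels:
    +1 (believability rose), -1 (fell), 0 (no clear change)."""
    means = np.array([trace[i:i + window].mean()
                      for i in range(0, len(trace) - window, window)])
    diffs = np.diff(means)
    return np.sign(diffs * (np.abs(diffs) > eps)).astype(int)

t = np.linspace(0, 6, 600)
trace = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
print(trace_to_ordinal(trace))   # e.g. [ 1  1  0 -1 -1 ...]
```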
{"title":"Discrete versus Ordinal Time-Continuous Believability Assessment","authors":"Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana","doi":"10.1109/aciiw52867.2021.9666288","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666288","url":null,"abstract":"What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven to be diverse and, thus, have yet to establish a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We have conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability. They face two different opponents which present different behaviours. In this novel process, these annotations are done moment-to-moment using two different annotation schemes: BTrace and RankTrace. This is followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool could be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113997496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations
Celia Kessassi
Over the last few years, a large number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress, and of psychosocial stress in virtual reality in particular, limits our ability to finely control stress induction. In my PhD project I plan to develop a computational model describing the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors, and other situational factors.
{"title":"Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations","authors":"Celia Kessassi","doi":"10.1109/aciiw52867.2021.9666443","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666443","url":null,"abstract":"During the last few years, a wide number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress and psychosocial stress in virtual reality hinders our ability to finely control stress induction. In my PhD project I plan to develop a computational model which will describe the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors and other situational factors.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123483823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Deep Learning Approaches for Protective Behaviour Detection Under Class Imbalance from MoCap and EMG data
Karim Radouane, Andon Tchechmedjiev, Binbin Xu, S. Harispe
The AffecMove challenge, organised in the context of the H2020 EnTimeMent project, offers three movement classification tasks in realistic settings and use cases. Our team, from the EuroMov DHM laboratory, participated in Task 1: protective behaviour (against pain) detection from motion capture and EMG data in patients suffering from pain-inducing musculoskeletal disorders. We implemented two simple baseline systems, an LSTM with pre-training (NTU-60) and a Transformer. We also adapted PA-ResGCN, a Graph Convolutional Network for skeleton-based action classification with state-of-the-art (SOTA) performance, to protective behaviour detection, augmented with strategies to handle class imbalance. For PA-ResGCN-N51 we explored naïve fusion strategies with an EMG-only convolutional neural network, which did not improve the overall performance. Unsurprisingly, the best performing system was PA-ResGCN-N51 (without EMG), with an F1 score of 53.36% on the test set for the minority class (MCC 0.4247). The Transformer baseline (MoCap + EMG) came second at 41.05% test F1 (MCC 0.3523) and the LSTM baseline third at 31.16% F1 (MCC 0.1763). On the validation set the LSTM showed performance comparable to PA-ResGCN; we hypothesize that the LSTM over-fitted to a validation set that was not very representative of the train/test distribution.
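One standard class-imbalance strategy for tasks like this is to weight the training loss by inverse class frequency, so that the rare protective class is not drowned out. A minimal PyTorch sketch of that weighting; the class counts are illustrative, not the challenge's distribution, and this is not necessarily the exact strategy the authors used.

```python
import torch
import torch.nn as nn

# Inverse-frequency class weights: n_samples / (n_classes * class_count).
counts = torch.tensor([900.0, 100.0])            # [non-protective, protective]
weights = counts.sum() / (len(counts) * counts)  # rare class gets a larger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                       # mock model outputs
labels = torch.randint(0, 2, (8,))               # mock ground-truth labels
print("weights:", weights, " loss:", criterion(logits, labels).item())
```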
{"title":"Comparison of Deep Learning Approaches for Protective Behaviour Detection Under Class Imbalance from MoCap and EMG data","authors":"Karim Radouane, Andon Tchechmedjiev, Binbin Xu, S. Harispe","doi":"10.1109/aciiw52867.2021.9666417","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666417","url":null,"abstract":"The AffecMove challenge organised in the context of the H2020 EnTimeMent project offers three tasks of movement classification in realistic settings and use-cases. Our team, from the EuroMov DHM laboratory participated in Task 1, for protective behaviour (against pain) detection from motion capture data and EMG, in patients suffering from pain-inducing muskuloskeletal disorders. We implemented two simple baseline systems, one LSTM system with pre-training (NTU-60) and a Transformer. We also adapted PA-ResGCN a Graph Convolutional Network for skeleton-based action classification showing state-of-the-art (SOTA) performance to protective behaviour detection, augmented with strategies to handle class-imbalance. For PA-ResGCN-N51 we explored naïve fusion strategies with an EMG-only convolutional neural network that didn't improve the overall performance. Unsurprisingly, the best performing system was PA-ResGCN-N51 (w/o EMG) with a F1 score of 53.36% on the test set for the minority class (MCC 0.4247). The Transformer baseline (MoCap + EMG) came second at 41.05% F1 test performance (MCC 0.3523) and the LSTM baseline third at 31.16% F1 (MCC 0.1763). On the validation set the LSTM showed performance comparable to PA-ResGCN, we hypothesize that the LSTM over-fitted on the validation set that wasn't very representative of the train/test distribution.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124090928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Implementing Parallel and Independent Movements for a Social Robot's Affective Expressions
Hannes Ritschel, Thomas Kiderle, E. André
The design and playback of natural, believable movements is a challenge for social robots, which face several limitations due to their physical embodiment and sometimes also their software. Taking the expression of happiness as an example, we present an approach for implementing parallel and independent movements on a social robot that does not have a full-fledged animation API. The technique can create more complex movement sequences than a typical sequential playback of poses and utterances and is thus better suited to expressing affect and nonverbal behaviors.
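One way to obtain parallel, independent movements without a full animation API is to drive each actuator group from its own thread, each replaying its own timed keyframe sequence. A minimal sketch of that idea; send_command is a hypothetical stand-in for the robot's low-level interface, and the channels and keyframes are invented.

```python
import threading
import time

def send_command(channel, value):
    """Hypothetical stand-in for the robot's low-level motor interface."""
    print(f"{time.monotonic():.2f}s  {channel} -> {value}")

def play(channel, keyframes):
    """Replay one channel's (delay_seconds, value) sequence independently."""
    for delay, value in keyframes:
        time.sleep(delay)
        send_command(channel, value)

# Head and arm run on their own timelines, unlike sequential playback,
# where each pose would block the next.
channels = {
    "head_yaw": [(0.2, 15), (0.5, -15), (0.5, 0)],
    "arm_lift": [(0.1, 40), (0.8, 80), (0.3, 0)],
}
threads = [threading.Thread(target=play, args=(name, kf))
           for name, kf in channels.items()]
for th in threads:
    th.start()
for th in threads:
    th.join()
```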
{"title":"Implementing Parallel and Independent Movements for a Social Robot's Affective Expressions","authors":"Hannes Ritschel, Thomas Kiderle, E. André","doi":"10.1109/aciiw52867.2021.9666341","DOIUrl":"https://doi.org/10.1109/aciiw52867.2021.9666341","url":null,"abstract":"The design and playback of natural and believable movements is a challenge for social robots. They have several limitations due to their physical embodiment, and sometimes also with regard to their software. Taking the example of the expression of happiness, we present an approach for implementing parallel and independent movements for a social robot, which does not have a full-fledged animation API. The technique is able to create more complex movement sequences than a typical sequential playback of poses and utterances and thus is better suited for expression of affect and nonverbal behaviors.","PeriodicalId":105376,"journal":{"name":"2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128591001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1