
Latest publications: 2021 9th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)

Quantifying the Intensity of Toxicity for Discussions and Speakers
Samiha Samrose, E. Hoque
In this work, using a YouTube news-show multimodal dataset of dyadic speakers engaged in heated discussions, we analyze toxicity through audio-visual signals. Firstly, as different speakers may contribute differently towards the toxicity, we propose a speaker-wise toxicity score revealing each individual's proportionate contribution. As discussions with disagreements may reflect some signals of toxicity, we categorize discussions into binary high-low toxicity levels in order to identify discussions needing more attention. By analyzing visual features, we show that these levels correlate with facial expressions: Upper Lid Raiser (associated with 'surprise'), Dimpler (associated with 'contempt'), and Lip Corner Depressor (associated with 'disgust') remain statistically significant in separating high and low intensities of disrespect. Secondly, we investigate the impact of audio-based features such as pitch and intensity that can significantly elicit disrespect, and utilize these signals in classifying disrespect and non-disrespect samples by applying a logistic regression model, achieving 79.86% accuracy. Our findings shed light on the potential of utilizing audio-visual signals to add important context towards understanding toxic discussions.
DOI: 10.1109/aciiw52867.2021.9666258 (published 2021-09-28)
Citations: 1
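As a rough illustration of the audio-based classification step described in the abstract, the sketch below trains a minimal pure-Python logistic regression on hypothetical normalized (pitch, intensity) pairs. The toy samples, labels, learning rate, and epoch count are all invented for demonstration; the paper's actual features and training setup are not reproduced here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=2000):
    """Stochastic gradient descent on the logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Hypothetical audio samples: (normalized pitch, normalized intensity);
# label 1 = disrespect, 0 = non-disrespect.
X = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7), (0.2, 0.1), (0.1, 0.3), (0.3, 0.2)]
y = [1, 1, 1, 0, 0, 0]

w, b = train_logreg(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(acc)
```

On this linearly separable toy data the classifier fits the training set exactly; the 79.86% reported in the abstract is on the authors' real dataset with held-out evaluation.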
Keep it Simple: Handcrafting Feature and Tuning Random Forests and XGBoost to face the Affective Movement Recognition Challenge 2021
Vincenzo D'Amato, L. Oneto, A. Camurri, D. Anguita
In this paper, we face the Affective Movement Recognition Challenge 2021, which is based on 3 naturalistic datasets on body movement, a fundamental component of everyday living both in the execution of the actions that make up physical functioning and in the rich expression of affect, cognition, and intent. The datasets were built on a deep understanding of the requirements of automatic detection technology for chronic pain physical rehabilitation, maths problem solving, and interactive dance contexts, respectively. In particular, we rely on a single, simple yet effective approach that is competitive with state-of-the-art results in the literature on all 3 datasets. Our approach is based on a two-step procedure: first we carefully handcraft features able to fully and synthetically represent the raw data, and then we apply Random Forest and XGBoost, carefully tuned with rigorous statistical procedures, on top of them to deliver the predictions. As requested by the challenge, we report results in terms of three different metrics: accuracy, F1-score, and Matthews Correlation Coefficient.
DOI: 10.1109/aciiw52867.2021.9666428 (published 2021-09-28)
Citations: 3
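The three challenge metrics named in the abstract (accuracy, F1-score, and Matthews Correlation Coefficient) can all be computed directly from binary confusion-matrix counts. A minimal pure-Python sketch, with toy labels invented purely for illustration:

```python
import math

def classification_metrics(y_true, y_pred):
    """Accuracy, F1-score, and Matthews Correlation Coefficient for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

    # MCC is robust to class imbalance: +1 perfect, 0 random, -1 inverse.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, f1, mcc

# Invented predictions: 6 of 8 correct (one false positive, one false negative).
acc, f1, mcc = classification_metrics([1, 1, 1, 0, 0, 0, 1, 0],
                                      [1, 1, 0, 0, 0, 1, 1, 0])
print(acc, f1, mcc)  # 0.75 0.75 0.5
```

Reporting MCC alongside accuracy and F1 is useful here because the challenge datasets are naturalistic and may be class-imbalanced.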
Job Interview Training System using Multimodal Behavior Analysis
Nao Takeuchi, Tomoko Koda
The paper introduces our system that recognizes the nonverbal behaviors of an interviewee, namely gaze, facial expression, and posture, using a Tobii eye tracker and cameras. The system compares the recognition results with models of exemplary interviewee nonverbal behavior and highlights the behaviors that need improvement while playing back the interview recording. The development goal for our system was to construct an inexpensive and easy-to-use system from commercially available hardware, open-source code, and a CG agent that provides feedback to the interviewee. The results of the initial evaluation indicate that improvements are needed in both the recognition accuracy of nonverbal behaviors and the quality of the interaction with the CG agent.
DOI: 10.1109/aciiw52867.2021.9666270 (published 2021-09-28)
Citations: 1
HirePreter: A Framework for Providing Fine-grained Interpretation for Automated Job Interview Analysis
Wasifur Rahman, Sazan Mahbub, Asif Salekin, M. Hasan, E. Hoque
There has been a rise in automated technologies that screen potential job applicants through affective signals captured from video-based interviews. These tools can make the interview process scalable and objective, but they often provide little to no information about how the machine learning model is making crucial decisions that impact the livelihoods of thousands of people. We built an ensemble model, combining Multiple-Instance-Learning and Language-Modeling based models, that can predict whether an interviewee should be hired or not. Using both model-specific and model-agnostic interpretation techniques, we can decipher the most informative time-segments and features driving the model's decision making. Our analysis also shows that our models are significantly impacted by the beginning and ending portions of the video. Our model achieves 75.3% accuracy in predicting whether an interviewee should be hired on the ETS Job Interview dataset. Our approach can be extended to interpret other video-based affective computing tasks such as analyzing sentiment, measuring credibility, or coaching individuals to collaborate more effectively in a team.
DOI: 10.1109/aciiw52867.2021.9666201 (published 2021-09-28)
Citations: 1
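An ensemble decision of the kind described above can be illustrated as a simple combination of the two base models' output probabilities. The weighted-average combiner below is a generic scheme chosen for illustration, not the paper's actual method, and the probabilities and threshold are invented:

```python
def ensemble_hire_decision(mil_prob, lm_prob, weight=0.5, threshold=0.5):
    """Combine two base models' hire probabilities and threshold the result.

    `weight` balances the Multiple-Instance-Learning model against the
    Language-Modeling model; both parameters are illustrative assumptions.
    Returns the combined score and the binary hire decision.
    """
    combined = weight * mil_prob + (1.0 - weight) * lm_prob
    return combined, combined >= threshold

# Hypothetical probabilities from the two base models for one interviewee.
score, hire = ensemble_hire_decision(0.75, 0.25)
print(score, hire)  # 0.5 True
```

Keeping the two base-model probabilities available (rather than only the fused decision) is what makes the model-specific interpretation step possible: each model's contribution to the final score can be inspected separately.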
Emotions in Socio-cultural Interactive AI Agents
A. Malhotra, J. Hoey
With the advancement of AI and robotics, computer systems have been put to many practical uses in a variety of domains such as healthcare, retail, and households. As AI agents become a part of our day-to-day life, successful human-machine interaction becomes an essential part of the experience. Understanding the nuances of human social interaction remains a challenging area of research, but there is growing consensus that emotional identity, or what social face a person presents in a given context, is a critical aspect. Therefore, understanding the identities displayed by humans, as well as the identity of agents and the social context, is a crucial skill for a socially interactive agent. In this paper, we provide an overview of a sociological theory of interaction called Affect Control Theory (ACT) and its recent extension, BayesACT. We discuss how this theory can track the fine-grained dynamics of an interaction, and explore how the associated computational model of emotion can be used by socially interactive agents. ACT considers the cultural sentiments (emotional feelings) about the concepts in the context, the identities at play, and the emotions felt, and aims for a successful interaction by maximizing emotional coherence. We argue that an AI agent's understanding of itself, and of the culture and context it is in, can change human perception of an agent from something that is machine-like to something that can establish and maintain a meaningful emotional connection.
DOI: 10.1109/aciiw52867.2021.9666252 (published 2021-09-28)
Citations: 0
Modeling the Induction of Psychosocial Stress in Virtual Reality Simulations
Celia Kessassi
Over the last few years, a large number of virtual reality applications dealing with psychosocial stress have emerged. However, our current understanding of stress and psychosocial stress in virtual reality hinders our ability to finely control stress induction. In my PhD project, I plan to develop a computational model describing the respective impact of each factor inducing psychosocial stress, including virtual reality factors, personal factors, and other situational factors.
DOI: 10.1109/aciiw52867.2021.9666443 (published 2021-09-28)
Citations: 0
emoPaint: Exploring Emotion and Art in a VR-based Creativity Tool
Jungah Son
I present emoPaint, a painting application that allows users to create paintings expressive of human emotions using a range of visual elements. While previous systems have introduced painting in 3D space, emoPaint focuses on supporting emotional characteristics by providing pre-made emotion brushes to users and allowing them to subsequently change the expressive properties of their paintings. Pre-made emotion brushes include art elements such as line textures, shape parameters, and color palettes. This enables users to control the expression of emotions in their paintings. I describe my implementation and illustrate paintings created using emoPaint.
DOI: 10.1109/aciiw52867.2021.9666398 (published 2021-09-28)
Citations: 0
Discrete versus Ordinal Time-Continuous Believability Assessment
Cristiana Pacheco, Dávid Melhárt, Antonios Liapis, Georgios N. Yannakakis, Diego Pérez-Liébana
What is believability? And how do we assess it? These questions remain a challenge in human-computer interaction and games research. When assessing the believability of agents, researchers opt for an overall view of believability reminiscent of the Turing test. Current evaluation approaches have proven diverse and, thus, have yet to converge on a framework. In this paper, we propose treating believability as a time-continuous phenomenon. We conducted a study in which participants play a one-versus-one shooter game and annotate the character's believability. They face two different opponents that present different behaviours. In this novel process, the annotations are made moment-to-moment using two different annotation schemes, BTrace and RankTrace, followed by the user's believability preference between the two playthroughs, effectively allowing us to compare the two annotation tools and time-continuous assessment with discrete assessment. Results suggest that a binary annotation tool could be more intuitive to use than its continuous counterpart and provides more information on context. We conclude that this method may offer a necessary addition to current assessment techniques.
DOI: 10.1109/aciiw52867.2021.9666288 (published 2021-09-28)
Citations: 0
Event Representation and Semantics Processing System for F-2 Companion Robot
A. Kotov, N. Arinkin, Alexander Filatov, L. Zaidelman, A. Zinina, Kirill Kivva
The F-2 companion robot is designed to implement and test various cognitive functions linked with text comprehension, as well as verbal and nonverbal communication strategies. F-2 has a syntactic parser and a text comprehension engine based on production rules, where the meaning of each incoming sentence, or each computer vision event, is associated with the most relevant scripts. The script engine is designed to simulate communicative reactions, emotional dynamics, and rational inferences. Scripts are activated depending on the state of the emotion model, and output behavioral packages in Behavior Markup Language (BML), executed by the robot. We demonstrate simultaneous responses of the robot to incoming phrases, human gazes, and events in the Tangram puzzle game, where the robot guides the player and reacts emotionally to game events.
DOI: 10.1109/aciiw52867.2021.9666303 (published 2021-09-28)
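BML is an XML format, so a behavioral package like "gaze at the player, then comment on a game event" can be assembled with standard XML tooling. The element names below follow the public BML 1.0 specification, but the specific block (ids, gaze target, utterance) is an invented example, not actual output of the F-2 system:

```python
import xml.etree.ElementTree as ET

BML_NS = "http://www.bml-initiative.org/bml/bml-1.0"

def make_reaction_block(block_id, gaze_target, utterance):
    """Build a minimal BML block: a gaze behavior followed by speech."""
    bml = ET.Element("bml", {"id": block_id, "xmlns": BML_NS})
    ET.SubElement(bml, "gaze", {"id": block_id + "_gaze1", "target": gaze_target})
    speech = ET.SubElement(bml, "speech", {"id": block_id + "_speech1"})
    ET.SubElement(speech, "text").text = utterance
    return ET.tostring(bml, encoding="unicode")

print(make_reaction_block("bml1", "player", "Nice move!"))
```

A BML realizer on the robot side would schedule and execute such blocks; here the sketch only shows how a script's output package could be serialized.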
Citations: 0
Visualization of social emotional appraisal process of an agent
Motoaki Sato, K. Terada, J. Gratch
Emotion expressions show the results of appraising sensory inputs that reflect both the physical and social environment. The observer of an emotion expression should decode how the sensory input was appraised by the actor, i.e., perform reverse appraisal. However, reverse appraisal is an ill-posed inverse problem, because the same emotional expression can be produced in different situations, and emotion expressions in the same situation vary with individual differences. To overcome this difficulty, individuals must have an appropriate appraisal model. Our final goal is to build a social skill training system that trains people who have difficulties in understanding the mental states of others. In the present paper, we show an emotional interactive agent with a transparent appraisal process. Whether social skills can be acquired through our system is a question for future work.
DOI: 10.1109/aciiw52867.2021.9666329 (published 2021-09-28)
Citations: 0