Variation in muscular tension has important expressive impacts on agent motion; however, it is difficult to tune simulations to achieve particular effects. With a focus on gesture animation, we introduce mass trackers, a lightweight approach that employs proportional-derivative (PD) control to track point masses that define the position of each wrist. The restriction to point masses allows the derivation of response functions that support straightforward tuning of system behavior. Using the point mass as an end-effector for an inverse kinematics rig allows easy control of both loose and high-tension arm motion. Examples illustrate the expressive variation that can be achieved with this tension modulation. Two perceptual studies confirm that these changes affect the overall level of tension perceived in the motion of a gesturing character and further explore the parameter space. Practical guidelines on tuning are discussed.
"Tunable tension for gesture animation". Michael Neff. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549631
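The core mass-tracker idea — a PD controller pulling a point mass toward a target — can be sketched in a few lines. This is a generic illustration, not the paper's formulation; the gains, mass, and the critical-damping choice below are illustrative assumptions.

```python
import math

def track_point_mass(kp, target=1.0, mass=1.0, dt=0.001, duration=1.0):
    """Simulate a 1D point mass pulled toward `target` by a PD controller.

    Returns the mass position after `duration` seconds. The damping gain is
    set for critical damping (kd = 2*sqrt(kp*m)), a common default choice.
    """
    kd = 2.0 * math.sqrt(kp * mass)          # critical damping (assumption)
    x, v = 0.0, 0.0                          # start at rest, away from target
    for _ in range(int(duration / dt)):
        force = kp * (target - x) - kd * v   # PD control law
        v += (force / mass) * dt             # semi-implicit Euler integration
        x += v * dt
    return x

# Higher stiffness (tension) tracks the target tightly; low stiffness lags,
# which is what produces the "loose" quality of low-tension motion.
tense = track_point_mass(kp=400.0)   # near the target within one second
loose = track_point_mass(kp=5.0)     # still well short of the target
```

Varying only `kp` (with `kd` slaved to it for critical damping) gives the single intuitive tension knob the abstract's response functions are designed to expose.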
In this work, we present a socially interactive agent that adapts its conversational strategies to maximize the user's engagement during the interaction. For this purpose, we train our agent with simulated users using deep reinforcement learning. First, the agent estimates the simulated user's engagement from the user's nonverbal behaviors and turn-taking status. This estimated engagement is then used as a reward to balance the agent's task (giving information) and its social goal (keeping the user highly engaged). The agent's dialogue acts may affect the user's engagement differently depending on the user's conversational preferences.
"Adapting conversational strategies to co-optimize agent's task performance and user's engagement". L. Galland, C. Pelachaud, Florian Pecune. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549674
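The reward shaping described above — balancing the information-giving task against keeping the user engaged — can be sketched as a weighted sum. The linear engagement estimate, the cue set, and the weight `w` are illustrative assumptions, not the authors' trained models.

```python
def estimate_engagement(nonverbal, turn_taking_ok):
    """Toy engagement estimate in [0, 1] from nonverbal cues.

    `nonverbal` maps cue names to normalized [0, 1] scores; the cue names
    and equal weighting are purely hypothetical.
    """
    cues = ["gaze_on_agent", "smiling", "leaning_in"]
    score = sum(nonverbal.get(c, 0.0) for c in cues) / len(cues)
    # Penalize the estimate when turn-taking broke down (e.g. interruptions).
    return score * (1.0 if turn_taking_ok else 0.5)

def reward(task_progress, engagement, w=0.5):
    """Balance task reward (information delivered) against social reward."""
    return w * task_progress + (1.0 - w) * engagement

user = {"gaze_on_agent": 1.0, "smiling": 0.5, "leaning_in": 0.0}
r = reward(task_progress=0.8, engagement=estimate_engagement(user, True))
```

In an RL setup, `r` would be the per-turn reward; tuning `w` shifts the learned policy between task-focused and socially focused strategies.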
Heng Yao, A. G. D. Siqueira, Anokhi Bafna, Devon Peterkin, Jenelle A. Richards, Megan L. Rogers, A. Foster, I. Galynker, Benjamin C. Lok
Virtual human interactions are increasingly used for empathy skills training in healthcare by providing feedback during or after the interaction. Post-interview feedback consists of evaluation results for users' empathic responses and can be provided once, without interfering with the interaction. However, this type of feedback is insufficient to engage trainees in developing a deeper understanding of, and insight into, their learning. Scaffolded ping-pong feedback consists of multiple rounds of descriptions explaining how to formulate the desired empathic responses, prompting users to explore how to respond empathically. To increase training effectiveness in enhancing users' expressed empathy, we studied how to apply scaffolded ping-pong feedback in virtual human interactions to train users' empathy skills, and how different forms of feedback affect users learning to express empathy to screen-based virtual humans. To evaluate training effectiveness, we collected 638 empathic responses from 27 clinician participants interacting with two virtual patients that integrated scaffolded ping-pong feedback, and compared them with 809 empathic responses from 25 clinician participants in the post-interview condition. The results show that scaffolded ping-pong feedback helped clinician participants provide a higher percentage of medium-empathy responses and a lower percentage of low-empathy responses than post-interview feedback. Scaffolded ping-pong feedback was perceived as more difficult to use but did not affect the overall interaction experience with virtual patients. This work demonstrates the applicability of integrating ping-pong feedback to strengthen the training effectiveness of virtual human education interventions.
"A virtual human interaction using scaffolded ping-pong feedback for healthcare learners to practice empathy skills". Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549621
Prasanth Murali, Farnaz Nouraei, M. Fallah, A. Kearns, Keith Rebello, Teresa K. O'Leary, R. Perkins, N. Joseph, J. Dedier, M. Paasche-Orlow, T. Bickmore
Training laypeople to promote vaccination among their friends and family may be an effective way to extend the reach of vaccination interventions. We describe a virtual agent system that teaches laypeople communication and counseling skills using a combination of a pedagogical agent and a role-playing agent that takes on the persona of someone resistant to vaccination. We conducted a preliminary evaluation of the prototype in which trainees first interacted with the prototype and then had a recorded conversation with a second person who was unvaccinated. First, we found that trainees were largely adherent to the skills taught by the agent. Second, the change in unvaccinated individuals' intent to get vaccinated correlated positively with objective scores of the empathy displayed by the trainee during the conversation. Third, unvaccinated partners rated the trainees highly on relationship quality and use of empathic listening skills.
"Training lay counselors with virtual agents to promote vaccination". Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549679
Arno Hartholt, Edward Fast, Zong-yong Li, Kevin Kim, Andrew Leeds, S. Mozgai
The research and development (R&D) of intelligent virtual agents (IVAs) is inherently complex. We aim to manage this complexity by combining the best aspects of academic and commercial approaches into a principled R&D platform that emphasizes interoperability, extendability, re-use, and support for multiple hardware targets. This IVA platform, the Virtual Human Toolkit 2.0, is a re-architecture of our earlier work and combines a modular message-passing architecture with a microservices architecture. This paper discusses our approach, design decisions, lessons learned, and the current status of this ongoing effort. We illustrate the strengths of the architecture, how best to use commodity AI cloud services in one's own work, and how to port legacy stand-alone software to a web service.
"Re-architecting the virtual human toolkit: towards an interoperable platform for embodied conversational agent research and development". Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549671
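The combination of modular message passing with microservices can be illustrated by a minimal in-process publish/subscribe bus: services exchange typed messages on topics without direct dependencies on each other. The topic names and wiring below are hypothetical and unrelated to the Toolkit's actual API.

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe message bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every payload published on `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        """Deliver `payload` to all handlers subscribed to `topic`."""
        for handler in self._subscribers[topic]:
            handler(payload)

# Hypothetical wiring: a speech-recognition service publishes transcripts,
# and an NLU service consumes them without knowing who produced them.
bus = MessageBus()
received = []
bus.subscribe("speech.recognized", received.append)
bus.publish("speech.recognized", {"text": "hello agent"})
```

Because producers and consumers share only topic names and payload schemas, any module can be swapped out or moved behind a remote microservice boundary without changing the others.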
Christopher You, Rashi Ghosh, Andrew Maxim, J. Stuart, Eric J. Cooks, Benjamin C. Lok
Virtual humans can act as non-judgmental conversational partners, eliciting greater self-disclosure. However, it is unclear which virtual human and conversational characteristics matter when self-disclosing. To address this gap, we conducted a set of qualitative, semi-formal interviews (n = 17) among computer science students to investigate participants' mental models of willingness to disclose to virtual humans and the characteristics of virtual humans that affect their self-disclosure. Our findings indicate that participants' mental models of virtual humans are largely inconsistent with the current literature; this inconsistency appears to elicit hesitancy and discomfort with virtual humans. Furthermore, trust and listening were identified as the two characteristics of a virtual human interaction most valuable to willingness to disclose, and these characteristics were valued differently for virtual humans than for real humans. From the interviews, we identify and provide guidelines for designing virtual human interactions and conversations that elicit greater willingness to disclose.
"How does a virtual human earn your trust?: guidelines to improve willingness to self-disclose to intelligent virtual agents". Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549686
Emotions are psychological traits associated with an individual's thoughts, feelings, behavioral responses, and experiences of pleasure and displeasure. The ability to recognise a conversational partner's emotional state from their speech (and respond accordingly) is a longstanding requirement of a fully capable intelligent virtual agent. However, although current approaches to emotion recognition primarily depend on supervised machine learning models, there are no comprehensive guidelines for annotating the corpora used to train such models with emotion labels. We present comprehensive guidelines for consistent and effective annotation of text corpora with emotion labels. In particular, our proposal directly addresses the requirements of multi-label emotion recognition, and we demonstrate how an implementation of our proposed guidelines led to substantially (30%) higher agreement scores among human annotators.
"Comprehensive guidelines for emotion annotation". Md. Adnanul Islam, Md. Saddam Hossain Mukta, P. Olivier, Md. Mahbubur Rahman. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-09-06. https://doi.org/10.1145/3514197.3549640
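In multi-label emotion annotation, each annotator assigns a *set* of labels per item, so agreement is often measured per item over label sets, e.g. with Jaccard overlap. The abstract does not specify which agreement metric the authors used; the sketch below shows one common choice, with invented labels.

```python
def jaccard(labels_a, labels_b):
    """Jaccard overlap between two label sets (1.0 when both are empty)."""
    a, b = set(labels_a), set(labels_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def mean_agreement(annotator_a, annotator_b):
    """Average per-item Jaccard agreement across a corpus of items."""
    scores = [jaccard(a, b) for a, b in zip(annotator_a, annotator_b)]
    return sum(scores) / len(scores)

# Invented multi-label annotations for three utterances by two annotators.
ann_a = [{"joy"}, {"anger", "disgust"}, {"sadness"}]
ann_b = [{"joy"}, {"anger"}, {"fear"}]
score = mean_agreement(ann_a, ann_b)  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```

A guideline change that raises this score indicates annotators are converging on the same label sets, which is what the paper's reported 30% improvement reflects at the corpus level.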
We propose an end-to-end sentiment-aware conversational agent based on two models: a reply-sentiment prediction model and a text generation model conditioned on the predicted sentiment and the dialogue context. Additionally, we propose using a sentiment classification model to evaluate the sentiment expressed by the agent during model development. Results show that explicitly guiding the text generation model with a pre-defined set of sentiment sentences leads to clear improvements in both the expressed sentiment and the quality of the generated text.
"Towards a sentiment-aware conversational agent". Isabel Dias, Ricardo Rei, Patrícia Pereira, Luísa Coheur. Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022), 2022-07-24. https://doi.org/10.1145/3514197.3549692
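The two-stage design — predict the reply's target sentiment, then condition generation on it — is commonly implemented by prepending a control token to the generator's input. The token format, separator, and rule-based stub predictor below are illustrative assumptions, not the authors' trained models.

```python
SENTIMENTS = ("positive", "neutral", "negative")

def predict_reply_sentiment(context):
    """Stub reply-sentiment predictor; a trained classifier goes here."""
    last_turn = context[-1].lower()
    if any(w in last_turn for w in ("thanks", "great", "love")):
        return "positive"
    if any(w in last_turn for w in ("hate", "awful", "angry")):
        return "negative"
    return "neutral"

def build_generator_input(context, sentiment):
    """Condition a seq2seq generator by prepending a sentiment control token.

    The "<sentiment>" token and "</s>" turn separator are hypothetical
    conventions; a real system would use whatever its tokenizer defines.
    """
    assert sentiment in SENTIMENTS
    return f"<{sentiment}> " + " </s> ".join(context)

context = ["How was the demo?", "It was great, thanks!"]
prompt = build_generator_input(context, predict_reply_sentiment(context))
```

At inference time, `prompt` would be fed to the generation model, which has been trained to associate the leading control token with the sentiment of the reply it produces.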
Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents (IVA 2022). https://doi.org/10.1145/3514197