
Proceedings of the 7th International Conference on Human-Agent Interaction: Latest Publications

Trainability Leads to Animacy: A Case of a Toy Drone
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3352776
Yutai Watanabe, Yuya Onishi, Kazuaki Tanaka, Hideyuki Nakanishi
We conducted an experiment in which we found that training a drone increased its perceived animacy, as well as participants' attachment to it and affinity with it. Additionally, we found that the drone impressed trainers as a puppy, while it impressed non-trainers as a fly.
Citations: 3
Let Me Get To Know You Better: Can Interactions Help to Overcome Uncanny Feelings?
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3351894
Maike Paetzel, Ginevra Castellano
With an ever-increasing demand for personal service robots and artificial assistants, companies, start-ups, and researchers aim to better understand what makes robot platforms more likable. Some argue that increasing a robot's humanlikeness leads to higher acceptability. Others, however, find that extremely humanlike robots are perceived as uncanny and are consequently often rejected by users. When investigating people's perception of robots, related work focuses almost solely on the first impression of these robots, often measured from images or video clips of the robots alone. Little is known about whether these initial positive or negative feelings persist when people are given the chance to interact with the robot. In this paper, 48 participants were gradually exposed to the capabilities of a robot, and their perception of it was tracked from their first impression to after playing a short interactive game with it. We found that initial uncanny feelings towards the robot decreased significantly after getting to know it better, which further highlights the importance of using real interactive scenarios when studying people's perception of robots. In order to elicit uncanny feelings, we used the 3D blended embodiment Furhat and designed four different facial textures for it. Our work shows that a blended platform can cause different levels of discomfort depending on the facial texture and may thus be an interesting tool for further research on the uncanny valley.
Citations: 12
Team Design Patterns
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3351892
J. Diggelen, Matthew Johnson
This paper introduces the concept of team design patterns and proposes an intuitive graphical language for describing the design choices that influence how intelligent systems (e.g., artificial intelligence, robotics) collaborate with humans. We build on the notion of design patterns and characterize important dimensions within human-agent teamwork. These dimensions are represented using a simple, intuitive graphical iconic language. The simplicity of the language allows easier expression, sharing, and comparison of human-agent teaming concepts. Such a language has the potential to improve collaborative interaction among a variety of stakeholders, such as end users, project managers, policy makers, and programmers, who may not be human-agent teamwork experts themselves. We also introduce an ontology and specification formalization that will allow translation of the simple iconic language into more precise definitions. By expressing the essential elements of teaming patterns in precisely defined abstract team design patterns, we provide a foundation that will enable working towards a library of reusable, proven solutions for human-agent teamwork.
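As a rough illustration of what a machine-readable team design pattern could look like, the sketch below encodes one pattern as a typed Python structure. The field names (actors, task_allocation) and the example pattern are hypothetical; the paper's actual ontology and iconic language are not reproduced here.

```python
# A minimal sketch of how a team design pattern might be encoded as a typed
# structure, assuming hypothetical fields (actors, task_allocation); the
# paper's actual ontology and iconic language are not reproduced here.
from dataclasses import dataclass
from enum import Enum


class Actor(Enum):
    HUMAN = "human"
    AGENT = "agent"


@dataclass
class TeamDesignPattern:
    name: str
    actors: list           # which kinds of actors participate in the pattern
    task_allocation: dict  # maps each subtask to the actor responsible for it
    description: str = ""


# Example: a hypothetical "human supervises, agent executes" pattern.
supervised_execution = TeamDesignPattern(
    name="supervised-execution",
    actors=[Actor.HUMAN, Actor.AGENT],
    task_allocation={"plan": Actor.HUMAN,
                     "execute": Actor.AGENT,
                     "verify": Actor.HUMAN},
    description="The human plans and verifies; the agent carries out the task.",
)
print(supervised_execution.name, supervised_execution.task_allocation)
```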
Citations: 17
Design of Cooperative Interaction between Humans and AI Creatures through Reinforcement Learning
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3352771
Ryosuke Takata, Yugo Takeuchi
Currently, it is difficult for humans and AI agents to cooperate because agents have an incomplete understanding of intentions. In this paper, we propose a design for cooperative interaction between humans and AI creatures. As an experiment, two creatures learned to simultaneously lift a heavy box in a virtual environment. As a result, one of the creatures was able to acquire the behavior of automatically following the other creature. We still need to verify, through ongoing investigations, whether cooperation between humans and AI can be established.
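The following is a minimal sketch of the kind of setup the abstract describes: two independent Q-learners that are rewarded only when they lift at the same moment. The one-step environment, reward values, and learning constants are illustrative assumptions, not the paper's actual simulation.

```python
# A minimal sketch of two independent Q-learners cooperating on a lift task:
# the heavy box only moves (reward 1) if both creatures lift simultaneously.
# Environment and constants are assumptions, not the paper's simulation.
import random

ACTIONS = ["lift", "wait"]
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per creature
alpha, epsilon, episodes = 0.1, 0.2, 5000

for _ in range(episodes):
    # epsilon-greedy action selection for each creature
    acts = [random.choice(ACTIONS) if random.random() < epsilon
            else max(q[i], key=q[i].get) for i in range(2)]
    reward = 1.0 if acts == ["lift", "lift"] else 0.0
    for i in range(2):  # single-step task, so no bootstrapped next-state value
        q[i][acts[i]] += alpha * (reward - q[i][acts[i]])

print(q)  # both tables should end up preferring "lift"
```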
Citations: 0
An Investigation on the Effectiveness of Multimodal Fusion and Temporal Feature Extraction in Reactive and Spontaneous Behavior Generative RNN Models for Listener Agents
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3351908
Hung-Hsuan Huang, Masato Fukuda, T. Nishida
Like a human listener, a listener agent reacts to its communication partners' non-verbal behaviors such as head nods, facial expressions, and voice tone. When adopting these modalities as inputs and developing a generative model of reactive and spontaneous behaviors using machine learning techniques, issues of multimodal fusion emerge: the effectiveness of different modalities, the frame-wise interaction of multiple modalities, and the temporal feature extraction of individual modalities. This paper describes our investigation of these issues in the task of generating virtual listeners' reactive and spontaneous idling behaviors. The work compares corresponding recurrent neural network (RNN) configurations on their performance in generating the listener agent's head movements, gaze directions, facial expressions, and postures from the speaker's head movements, gaze directions, facial expressions, and voice tone. A data corpus recorded in a subject experiment on active listening is used as the ground truth. The results showed that video information is more effective than audio information, and that frame-wise interaction of modalities is more effective than the temporal characteristics of individual modalities.
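To make the compared configurations concrete, here is a minimal PyTorch sketch contrasting frame-wise (early) fusion of modalities with per-modality temporal encoders fused afterwards. The feature dimensions, layer sizes, and output head are placeholders, not the paper's architecture.

```python
# A minimal PyTorch sketch of the two fusion configurations the abstract
# compares. All dimensions are placeholder assumptions.
import torch
import torch.nn as nn

video_dim, audio_dim, hidden, out_dim = 32, 16, 64, 8

class EarlyFusionRNN(nn.Module):
    """Concatenate modalities at each frame, then run one shared GRU."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(video_dim + audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, video, audio):           # (B, T, dim) each
        h, _ = self.rnn(torch.cat([video, audio], dim=-1))
        return self.head(h)                    # per-frame behavior outputs

class LateFusionRNN(nn.Module):
    """One GRU per modality; fuse the temporal features afterwards."""
    def __init__(self):
        super().__init__()
        self.v_rnn = nn.GRU(video_dim, hidden, batch_first=True)
        self.a_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, video, audio):
        hv, _ = self.v_rnn(video)
        ha, _ = self.a_rnn(audio)
        return self.head(torch.cat([hv, ha], dim=-1))

v, a = torch.randn(4, 100, video_dim), torch.randn(4, 100, audio_dim)
print(EarlyFusionRNN()(v, a).shape, LateFusionRNN()(v, a).shape)
```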
Citations: 2
Understanding Dialogue Acts by Bayesian Inference and Reinforcement Learning
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3352786
Akane Matsushima, N. Oka, Chie Fukada, Kazuaki Tanaka
Dialogue acts (DAs) characterize utterances at the illocutionary level (Austin 1962). DAs constitute the most fundamental part of communication, and the comprehension of DAs is essential to human-agent interaction. The purpose of this study is to enable an agent, on the one hand, to behave properly in response to DAs without representing them explicitly and, on the other hand, to estimate DAs explicitly. The former is realized by reinforcement learning and the latter by Bayesian inference. The simulation results demonstrated that the agent not only responded to DAs successfully but also inferred them correctly.
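As a minimal sketch of the Bayesian-inference half of the approach, the snippet below maintains a posterior over a toy set of DAs given one observed cue. The DA set and likelihood values are illustrative assumptions, not the paper's model.

```python
# A minimal sketch of Bayesian DA estimation: update a posterior over
# dialogue acts given an observed utterance cue. Toy values throughout.
DAS = ["request", "inform", "greet"]
prior = {da: 1 / 3 for da in DAS}
# P(observed cue | DA), e.g. the cue "question intonation" (assumed values)
likelihood = {"request": 0.7, "inform": 0.2, "greet": 0.1}

def posterior(prior, likelihood):
    unnorm = {da: prior[da] * likelihood[da] for da in prior}
    z = sum(unnorm.values())
    return {da: p / z for da, p in unnorm.items()}

print(posterior(prior, likelihood))  # belief shifts toward "request"
```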
Citations: 3
Evaluation of Relationship between Stroke Pace and Speech Rate for Touch-Care Robot
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3352793
Suguru Honda, Taishi Sawabe, Shogo Nishimura, Wataru Sato, Yuichiro Fujimoto, Alexander Plopski, M. Kanbara, H. Kato
Humanitude is a multimodal communication care method that utilizes seeing, touching, and speaking. Touch care, which focuses mainly on stroking motion, is also well known as an effective care method. These care techniques are effective in practice; however, it is difficult to provide such therapy to all patients due to the lack of human resources. To address this problem, researchers are trying to develop a touch-care robot that can provide touch care automatically. Conventional research on touch-care robots focuses mainly on the stroking movement, or on speech that considers only the impression of the spoken content but not prosodic information. In this research, we therefore focus on speech rate, a prosodic feature, combined with stroking motion. We investigate the effects of speech rate and evaluate the relationship between stroke pace and speech rate in order to improve human comfort. We conducted a user study with six male participants around 20 years old. The questionnaire results suggest a correlation between stroke pace and speech rate that provides comfort.
Citations: 1
Cloud-Based Sentiment Analysis for Interactive Agents
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3351883
M. Keijsers, C. Bartneck, H. Kazmi
Emotions play an important role in human-agent interaction. To realise natural interaction it is essential for an agent to be able to analyse the sentiment in users' utterances. Modern agents use a distributed service model in which their functions can be located on any number of computers, including cloud-based servers. Outsourcing speech recognition and sentiment analysis to a cloud service enables even simple agents to adapt their behaviour to the emotional state of their users. In this study we test whether sentiment analysis tools can accurately gauge sentiment in human-chatbot interaction. To that end, we compare the quality of sentiment analysis obtained from three major suppliers of cloud-based sentiment analysis services (Microsoft, Amazon and Google). In addition, we compare their results with the leading lexicon-based software, as well as with human ratings. The results show that although the sentiment analysis tools agree moderately with each other, they do not correlate well with human ratings. While the cloud-based services would be an extremely useful tool for human-agent interaction, their current quality does not justify their usage in human-agent conversations.
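For readers who want to try the lexicon-based side of such a comparison, the sketch below scores chat utterances with VADER via NLTK. The abstract does not name its lexicon tool, so VADER here is an assumed stand-in, not necessarily the software the authors evaluated.

```python
# A minimal sketch of lexicon-based sentiment scoring of chat utterances,
# using VADER via NLTK as an assumed stand-in for "lexicon-based software".
# Requires: pip install nltk, plus the one-time vader_lexicon download below.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

utterances = [
    "I love talking to you, this is fun!",
    "You are useless and you never understand me.",
]
for text in utterances:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score
    print(f"{scores['compound']:+.2f}  {text}")
```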
Citations: 9
A Computational Model of Trust-, Pupil-, and Motivation Dynamics
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3351896
Trond A. Tjøstheim, B. Johansson, C. Balkenius
In the near future, autonomous machines are likely to interact with humans increasingly often and to carry out their functions outside controlled settings. Both of these developments raise the requirement that machines be trustworthy to humans. In this work, we argue that machines may also benefit from being able to explicitly build or withdraw trust with specific humans. The latter is relevant in situations where the integrity of an autonomous system is compromised, or where humans display untrustworthy behaviour towards the system. Examples of systems that could benefit are delivery robots, maintenance robots, or autonomous taxis. This work contributes a biologically plausible model of unconditional trust dynamics, which simulates trust building based on familiarity but can be modulated by painful and gentle touch. The model displays interactive behaviour by realistically controlling pupil dynamics as well as determining approach and avoidance motivation.
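As an illustration of trust dynamics of this general kind, the sketch below implements a simple discrete-time update in which familiarity builds trust, gentle touch boosts it, and painful touch withdraws it. The update rule and all constants are assumptions for illustration, not the paper's actual model.

```python
# An illustrative discrete-time trust update: familiarity builds trust,
# gentle touch boosts it, painful touch withdraws it. The rule and the
# constants are assumptions, not the paper's model.
def update_trust(trust, familiarity, touch=None,
                 growth=0.05, gentle_boost=0.10, pain_penalty=0.40):
    trust += growth * familiarity * (1.0 - trust)  # familiarity builds trust
    if touch == "gentle":
        trust += gentle_boost * (1.0 - trust)
    elif touch == "painful":
        trust -= pain_penalty * trust              # pain withdraws trust
    return min(max(trust, 0.0), 1.0)

trust = 0.2
for step, touch in enumerate([None, "gentle", None, "painful", None]):
    trust = update_trust(trust, familiarity=0.8, touch=touch)
    print(f"step {step}: touch={touch}, trust={trust:.2f}")
```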
Citations: 6
Development of a Prototype of Face-to-Face Conversational Holographic Agent for Encouraging Co-regulation of Learning
Pub Date: 2019-09-25 | DOI: 10.1145/3349537.3352802
Ayane Hisatomi, Yutaka Ishii, T. Mochizuki, Hironori Egi, Yoshihiko Kubota, H. Kato
This paper describes a conversational holographic agent that helps learners assess and manage their participation in order to encourage co-regulation in a face-to-face discussion. The agent works with a voice aggregation system that calculates each participant's utterances, silence ratio, participation ratio, and turn-taking during the discussion in real time, and it produces prompting utterances and non-verbal actions to encourage learners' participation and the summarization and clarification of what they say. During discussions with each other, learners follow the prompts and might model how the agent regulates participation.
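As a rough sketch of the statistics such a voice aggregation system computes, the snippet below derives per-speaker speaking time, participation ratio, turn counts, and the overall silence ratio from timestamped utterance segments. The segment format (speaker, start, end) is an assumption.

```python
# A minimal sketch of discussion statistics from timestamped utterance
# segments: speaking time, participation ratio, turns, silence ratio.
# The (speaker, start, end) segment format is an assumption.
segments = [  # sorted by start time, times in seconds
    ("A", 0.0, 4.0), ("B", 5.0, 7.0), ("A", 7.5, 9.0), ("C", 10.0, 14.0),
]
total = 15.0  # total discussion length in seconds

speech, turns, prev = {}, {}, None
for speaker, start, end in segments:
    speech[speaker] = speech.get(speaker, 0.0) + (end - start)
    if speaker != prev:  # a new turn starts whenever the speaker changes
        turns[speaker] = turns.get(speaker, 0) + 1
    prev = speaker

silence_ratio = 1.0 - sum(speech.values()) / total
for s in speech:
    print(f"{s}: participation={speech[s] / total:.0%}, turns={turns[s]}")
print(f"silence ratio: {silence_ratio:.0%}")
```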
Citations: 0