
Proceedings of the Fourth International Conference on Human Agent Interaction: Latest Publications

Model-Driven Gaze Simulation for the Blind Person in Face-to-Face Communication
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980482
S. Qiu, S. A. Anas, Hirotaka Osawa, G.W.M. Rauterberg, Jun Hu
In face-to-face communication, eye gaze is integral to a conversation, supplementing verbal language. Sighted people often use eye gaze to convey nonverbal information in social interactions, which a blind conversation partner can neither access nor react to. In this paper, we present E-Gaze glasses (E-Gaze), an assistive device based on an eye-tracking system. It simulates gaze for the blind person, allowing them to react to and engage the sighted in face-to-face conversations. It is designed based on a model that combines an eye-contact mechanism with a turn-taking strategy. We further propose an experimental design to test E-Gaze and hypothesize that model-driven gaze simulation can enhance conversation quality between the sighted and the blind person in face-to-face communication.
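The abstract names the two ingredients of the model (eye contact and turn-taking) but not its internals; a rough illustrative sketch of such a gaze policy could look like the following, where the function name, states, and thresholds are all assumptions, not the authors' design:

```python
def gaze_target(role, t_in_turn, turn_length):
    """Illustrative gaze policy combining an eye-contact mechanism with a
    turn-taking strategy: listeners hold eye contact, while speakers avert
    their gaze mid-turn and re-establish contact near turn boundaries to
    signal turn-yielding. All thresholds here are hypothetical."""
    if role == "listening":
        return "partner"  # listeners mostly keep their gaze on the speaker
    # Speaking: make contact at the start and end of the turn, avert in between.
    if t_in_turn < 0.2 * turn_length or t_in_turn > 0.8 * turn_length:
        return "partner"
    return "averted"
```

Driven by a turn-taking tracker, such a policy would steer the simulated eyes on the E-Gaze glasses.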
Citations: 2
Attention Estimation for Child-Robot Interaction
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980510
M. Attamimi, M. Miyata, Tetsuji Yamada, T. Omori, Ryoma Hida
In this paper, we present a method of estimating a child's attention, one of the more important human mental states, in a free-play scenario of child-robot interaction. First, we developed a system that senses a child's verbal and nonverbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a Support Vector Machine (SVM) to estimate a human's attention level. We investigated the accuracy of the proposed method by comparing it with a human judge's estimation and obtained some promising results, which we discuss here.
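The abstract names the classifier (an SVM) and the feature modalities but not the feature encoding; a minimal scikit-learn sketch with invented synthetic feature values, not the paper's data, might look like:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical multimodal feature vectors:
# [gaze-on-robot ratio, smile intensity, distance to robot in metres].
# Labels: 1 = attentive, 0 = inattentive (synthetic illustration only).
X = np.array([
    [0.9, 0.8, 0.5],
    [0.8, 0.6, 0.6],
    [0.1, 0.2, 2.0],
    [0.2, 0.1, 1.8],
])
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="rbf").fit(X, y)        # default RBF kernel
pred = clf.predict([[0.85, 0.7, 0.55]])  # child gazing at the robot, close by
```

In practice the features would be aggregated over a time window of the sensed interaction before classification.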
Citations: 8
Pre-scheduled Turn-Taking between Robots to Make Conversation Coherent
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974819
T. Iio, Y. Yoshikawa, H. Ishiguro
Since a talking robot cannot escape errors in recognizing the user's speech in daily environments, its verbal responses sometimes feel incoherent with the context of the conversation. This paper presents a solution to this problem that generates a social context in which the user is guided to find coherence in the robot's utterances, even when its response is produced from incorrect recognition of the user's speech. We designed a novel turn-taking pattern in which two robots behave according to a pre-scheduled scenario to generate such a social context. Two experiments showed that participants who talked with two robots using this turn-taking pattern felt the robots' responses to be more coherent than participants who talked with one robot not using it; our proposed turn-taking pattern therefore generated a social context for the user's flexible interpretation of the robots' responses. This result suggests the potential of a multiple-robot approach for improving the quality of human-robot conversation.
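The abstract does not give the scenarios themselves; as a toy sketch with invented dialogue content, the pre-scheduled pattern, where the second robot takes the turn right after the first robot's possibly error-driven response, can be expressed as:

```python
def scripted_turns(recognized_user_utterance):
    """Toy pre-scheduled turn-taking pattern between two robots. After Robot A
    responds to the (possibly misrecognized) user utterance, Robot B addresses
    Robot A, so the following exchange stays locally coherent regardless of
    recognition errors. Utterance texts are invented for illustration."""
    return [
        ("Robot A", f"You mentioned '{recognized_user_utterance}', right? Interesting."),
        ("Robot B", "Robot A, what do you think about that topic?"),
        ("Robot A", "I would like to hear more about it first."),
    ]
```

Because turns 2 and 3 are robot-to-robot, they remain coherent even when turn 1 rests on a recognition error.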
Citations: 23
Session details: Main Track Session VIII: Interaction Tactics
M. Imai, Yasuhiko Kitamura
DOI: 10.1145/3257130
Citations: 0
Can Children Anthropomorphize Human-shaped Communication Media?: A Pilot Study on Co-sleeping with a Huggable Communication Medium
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980519
Junya Nakanishi, H. Sumioka, H. Ishiguro
This pilot study reports an experiment in which we introduced huggable communication media into daytime sleep in a co-sleeping situation. The purpose of the experiment was to investigate whether the media would help soothe child users to sleep and how the experience of hugging an anthropomorphic communication medium affects children's anthropomorphic impressions of it during co-sleeping. In the experiment, nursery teachers read two-year-old and five-year-old children to sleep through a huggable communication medium called Hugvie and asked the children to draw Hugvie before and after the reading to evaluate changes in their impressions of it. The results show differences between the two classes both in sleeping behavior with Hugvie and in impressions of it. Moreover, they suggest that co-sleeping with a humanlike communication medium may induce children to sleep deeply.
Citations: 2
Sharing Emotion Described as Text on the Internet by Changing Self-physiological Perception
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974825
Sho Sakurai, Yuki Ban, Toki Katsumura, Takuji Narumi, T. Tanikawa, M. Hirose
Humanlike agents, such as humanoid robots or avatars, can be felt to have and communicate emotions through manipulation of bodily information. Meanwhile, as in the case of Internet bots, it is still difficult to communicate emotion described as text, let alone to empathize with it, owing to the degradation of information online. The current study proposes a method for experiencing emotion on the Internet by reproducing a mechanism for evoking emotion. This method evokes a number of emotions described on the Web by changing self-physiological perception with sensory stimuli. To investigate the feasibility of our method, we built a system named "Communious Mouse." The system rewrites the perception of one's skin temperature and pulse in the palm by presenting vibration and thermal stimulation through a mouse device to evoke emotion. The current paper discusses the feasibility of our method based on feedback obtained through an exhibition of the system.
Citations: 1
Haptic Workspace Control of the Humanoid Robot Arms
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980505
Longjiang Zhou, A. H. Adiwahono, Yuanwei Chua, W. L. Chan
This paper presents a haptic workspace control approach for the arms of a humanoid robot, using the Omega 7 haptic device as the control input device. The haptic device, with its small workspace, is used to control the robot's two arm end-effectors, which have a large workspace. For safety, the paper also puts forward an approach that lets users feel a haptic feedback force when the robot's end-effectors touch virtual boundary areas. The haptic device can move further, but the robot arm end-effector stops, and the haptic force generated is proportional to the travel distance of the haptic device's end-effectors until it reaches the maximum force permitted by the designer. Simulation experiments were designed and implemented to test the motion performance of the arm end-effectors under haptic-device control and the haptic force generated when the arm end-effectors reach the virtual boundary walls.
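The abstract specifies the force law only qualitatively: zero until the robot end-effector reaches the virtual wall, then proportional to the haptic device's further travel, saturating at a designer-set maximum. A minimal sketch of that law, with stiffness and limit values that are hypothetical rather than taken from the paper:

```python
def boundary_force(penetration_m, stiffness_n_per_m=50.0, f_max_n=8.0):
    """Feedback force felt on the haptic device once its end-effector travels
    past the virtual boundary while the robot arm stays stopped at the wall.
    Proportional to penetration depth, clamped at the permitted maximum.
    Stiffness and maximum-force values here are illustrative assumptions."""
    if penetration_m <= 0.0:
        return 0.0  # inside the allowed workspace: no feedback force
    return min(stiffness_n_per_m * penetration_m, f_max_n)
```

A controller would evaluate this each cycle from the device pose and command the resulting force back to the haptic device.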
Citations: 0
Embodiment of Video-mediated Communication Enhances Social Telepresence
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974826
Yuya Onishi, Kazuaki Tanaka, Hideyuki Nakanishi
There are several merits to embodying the remote partner's body in a video conference: showing it physically, enabling physical contact, and enhancing social telepresence. In this paper, we tackled embodying a part of a remote partner's body in a video conference. To show how effectively the embodied body part works, we focused on face-to-face communication through hand gestures such as thumb wrestling, a finger-number game, and pointing. We developed a robotic arm that appears as if the remote partner's arm has popped out of the video; it synchronizes with the remote partner's arm movements. We conducted experiments to verify this method of embodying part of a remote partner's body. Comparing video with physical embodiment, we found that our method reduced the feeling of being far from the remote partner and enhanced social telepresence.
Citations: 15
Are you talking to me?: Improving the Robustness of Dialogue Systems in a Multi Party HRI Scenario by Incorporating Gaze Direction and Lip Movement of Attendees
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2974823
Viktor Richter, Birte Carlmeyer, Florian Lier, Sebastian Meyer zu Borgsen, David Schlangen, F. Kummert, S. Wachsmuth, B. Wrede
In this paper, we present our humanoid robot "Meka", participating in a multi-party human-robot dialogue scenario. Active arbitration of the robot's attention based on multi-modal stimuli is utilised to observe persons outside the robot's field of view. We investigate the impact of this attention management and addressee recognition on the robot's capability to distinguish utterances directed at it from communication between humans. Based on the results of a user study, we show that mutual gaze at the end of an utterance, as a means of yielding a turn, is a substantial cue for addressee recognition. Verifying the speaker through the detection of lip movements can further increase precision. Furthermore, we show that even a rather simplistic fusion of gaze and lip-movement cues allows a considerable enhancement in addressee estimation and can be altered to adapt to the requirements of a particular scenario.
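The abstract describes the fusion as "rather simplistic"; a toy conjunction rule in that spirit (inputs and threshold are assumptions, not the authors' implementation) might be:

```python
def addressed_to_robot(gaze_on_robot_at_end, lips_moving_ratio, lip_threshold=0.5):
    """Toy addressee-recognition fusion: an utterance counts as robot-directed
    only when the speaker's gaze is on the robot at the end of the utterance
    (the turn-yielding cue) AND their lips were moving during enough of it
    (speaker verification). The threshold value is hypothetical."""
    return gaze_on_robot_at_end and lips_moving_ratio >= lip_threshold
```

Requiring both cues is what filters out utterances exchanged between humans while the robot happens to hear them.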
Citations: 26
Promoting Physical Activities by Massive Competition in Virtual Marathon
Pub Date : 2016-10-04 DOI: 10.1145/2974804.2980483
Yuya Nakanishi, Y. Kitamura
Overweight and obesity due to lack of physical activity pose a serious social problem, and a number of systems that promote physical activity using information and communication technologies have been developed. Virtual Kobe Marathon is an Android app that lets a user experience a marathon race virtually. It shows the Kobe Marathon course on its display and moves an agent along the course according to the user's moving distance. It also has a competition scheme that lets a user compete virtually with others running at different places and times. Because this scheme supports competition with only a small number of opponents, in this paper we introduce a massive competition scheme utilizing the records of the 17,769 runners who participated in the 3rd Kobe Marathon. An evaluation experiment shows that the massive competition scheme promotes physical activity more than the one-to-one competition scheme.
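The abstract does not detail how the 17,769 records drive the competition; one plausible sketch (function name and data shapes are assumptions) ranks the user against each recorded runner's distance at the same elapsed race time:

```python
import bisect

def virtual_rank(user_distance_km, recorded_distances_km):
    """Rank the user among recorded runners at the same elapsed race time:
    1 + the number of recorded runners who have covered more distance.
    `recorded_distances_km` stands in for distances interpolated from the
    stored marathon records at that elapsed time."""
    ordered = sorted(recorded_distances_km)
    runners_ahead = len(ordered) - bisect.bisect_right(ordered, user_distance_km)
    return 1 + runners_ahead
```

Recomputing this rank as the user's distance grows would give the live "position in the field" feedback that a massive competition needs.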
Citations: 4