
Proceedings of the 3rd International Conference on Human-Agent Interaction: Latest Publications

Using Video Preferences to Understand the Human Perception of Real and Fictional Robots
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814958
Omar Mubin, M. Obaid, E. B. Sandoval, M. Fjeld
In this paper the nexus between fictional and real robots in Human-Robot Interaction (HRI) is explored. We claim that design guidelines for HRI must not be borrowed blindly from fiction, as contradictions between the two may emerge with respect to what the human user desires. To understand human perception of robots appearing in movies, we analyse viewing statistics and qualitative comments on a set of YouTube videos comprising fictional and real robots. Analysis of the viewing statistics showed that real robots are more popular. Furthermore, analysis of the comments showed that two real robots (Nao and Shakey) generated significantly more positive comments and significantly more attributions of usage in human society than the two fictional robots (AstroBoy and HAL9000). Based on the sample of robots considered in this research, our results reveal that, contrary to expectation, humans are more exposed to real robots and prefer them more, and we conclude by reasserting the contradiction that emerges between real and fictional robots.
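The abstract reports that real robots drew significantly more positive comments, but does not name the statistical test used. As an illustration only, a comparison like this is often run as a Pearson chi-square on a 2x2 table; the counts below are hypothetical, not the paper's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table.

    Rows: robot type (real / fictional); columns: comment polarity
    (positive / other). Larger values mean a stronger association.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for observed, row, col in ((a, row1, col1), (b, row1, col2),
                               (c, row2, col1), (d, row2, col2)):
        expected = row * col / n
        stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical comment counts, NOT the paper's data: [positive, other].
real_robots = [120, 80]       # e.g. pooled Nao + Shakey comments
fictional_robots = [60, 140]  # e.g. pooled AstroBoy + HAL9000 comments
stat = chi_square_2x2([real_robots, fictional_robots])
print(round(stat, 2))  # 36.36
```

With these made-up counts the statistic is far above the 3.84 critical value for one degree of freedom, which is the shape of evidence the paper's claim of significance implies.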
Citations: 4
Facial Expression Training System using Bilinear Shape Model
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814985
Byung-Hwa Park, Se-Young Oh
We introduce a facial expression training system using a bilinear shape model, which helps people practice making facial expressions. The user's face on the camera preview screen is reconstructed into a 3D face model, and that model is transformed into a blend shape model representing the facial expression. This way, the system can precisely analyze the user's facial expression. As the target 3D face model shown on screen changes its facial expression, it leads the user to change their own expression to match. The system then recognizes whether the user's facial expression matches that of the 3D face model. As the system gives the user various missions to change their facial expression, the user can practice those expressions. The system can be used by Bell's palsy patients who need facial rehabilitation exercises, or by anyone who needs to practice a particular facial expression, such as a flight attendant's smile or facial mimicry.
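A blend shape model of the kind the abstract mentions represents an expression as a neutral face plus a weighted sum of per-expression offsets. The toy two-landmark face, helper names, and matching tolerance below are illustrative assumptions, not the system's actual implementation:

```python
import numpy as np

def blend(neutral, targets, weights):
    """Blend shape model: the neutral face plus a weighted sum of
    per-expression offsets (target - neutral)."""
    face = neutral.astype(float).copy()
    for w, target in zip(weights, targets):
        face += w * (target - neutral)
    return face

def expression_match(user_weights, goal_weights, tol=0.1):
    """Hypothetical matching rule: the user's estimated expression
    weights must all lie within `tol` of the target model's weights."""
    diff = np.abs(np.asarray(user_weights) - np.asarray(goal_weights))
    return bool(np.all(diff < tol))

# Toy face with two landmarks (x, y); the 'smile' target raises both by 0.5.
neutral = np.array([[0.0, 0.0], [1.0, 0.0]])
smile = np.array([[0.0, 0.5], [1.0, 0.5]])
face = blend(neutral, [smile], [0.6])   # a 60% smile
print(expression_match([0.6], [0.65]))  # True: within tolerance
```

Comparing weight vectors rather than raw pixels is what makes this representation convenient for judging whether the user's expression matches the on-screen model.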
Citations: 0
Real Time Hand Gesture Recognition Using Random Forest and Linear Discriminant Analysis
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814997
O. Sangjun, R. Mallipeddi, Minho Lee
This paper presents a real-time hand gesture detection and recognition method. The proposed method consists of three steps: detection, validation and recognition. In the detection stage, several areas estimated to contain hand shapes are detected over the whole image by a random forest hand detector. The next steps are the validation and recognition stages. To check whether each area contains a hand, we use Linear Discriminant Analysis. The proposed work is based on the assumption that samples with similar postures are distributed near each other in a high-dimensional space, so the training data used for the random forest are also analyzed in a three-dimensional space. In the reduced-dimensional space, we can determine decision conditions for validation and classification. After detecting the exact area of the hand, we need to search for the hand only in the nearby area, which reduces the processing time of the hand detection process.
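The validation stage can be pictured as a two-class Fisher LDA that projects candidate windows onto a single discriminant direction and thresholds them. This sketch covers only that stage, on synthetic 2-D features; the feature layout, function names, and threshold rule are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def fisher_lda_direction(pos, neg):
    """Two-class Fisher LDA direction: w proportional to
    Sw^{-1} (mu_pos - mu_neg), with Sw the pooled within-class scatter."""
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    sw = (np.cov(pos.T, bias=True) * len(pos)
          + np.cov(neg.T, bias=True) * len(neg))
    w = np.linalg.solve(sw + 1e-6 * np.eye(sw.shape[0]), mu_p - mu_n)
    return w / np.linalg.norm(w)

def validate_hand(candidate, w, threshold):
    """Accept a candidate window as a hand if its projection onto the
    LDA direction falls on the hand side of the threshold."""
    return bool(float(candidate @ w) > threshold)

rng = np.random.default_rng(0)
# Toy 2-D features for candidate windows: hands cluster high, background low.
hands = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
background = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))

w = fisher_lda_direction(hands, background)
# Decision threshold halfway between the projected class means.
threshold = 0.5 * (hands.mean(axis=0) @ w + background.mean(axis=0) @ w)

print(validate_hand(np.array([2.1, 1.9]), w, threshold))  # True
print(validate_hand(np.array([0.1, -0.2]), w, threshold)) # False
```

In the paper's setting the random forest proposes windows and this kind of projection then filters out non-hand proposals cheaply.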
Citations: 10
Building Pedagogical Relationships Between Humans and Robots in Natural Interactions
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814941
Hirofumi Okazaki, Yusuke Kanai, Masa Ogata, Komei Hasegawa, Kentaro Ishii, M. Imai
The purpose of our study is to investigate human teaching behavior and robot learning behavior when a human teaches a robot. Agents for learning support need to build a pedagogical relationship, in which a teacher agent and a student agent change their behaviors as they recognize each other's characteristic behaviors. To investigate how a robot behaving as a student should respond to humans' teaching behaviors in a pedagogical relationship between human and robot, we conducted a case study using a game played on a tablet with a robot. In the case study, we analyzed how humans changed their teaching behaviors when the humanoid robot failed to understand what they taught. From the results, we observed that some subjects taught the robot carefully in each trial so that the robot could understand them. Moreover, we also observed that subjects' teaching behavior changed when they received feedback from the robot about their teaching.
Citations: 0
Implementation of Doorlock System Using Face Recognition
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814969
Jaejoon Hwang, Yoojin Nam, Sangheon Lee, Gil-Jin Jang
This paper proposes a computerized doorlock system that applies face recognition to photographic images captured by digital cameras. The doorlock is equipped with a simple Raspberry Pi board that captures an image of a person claiming to be a valid user; the image is then transferred to a server, where face recognition is carried out to decide whether the claimed user is enrolled. Unlike doorlock systems based on password numbers, the proposed system removes the need to input passcodes, and it can be combined with many existing doorlock systems.
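The abstract describes the server-side enroll/verify decision but not the matcher. One common way to make that decision, shown here purely as an assumption, is to compare a face embedding of the probe image against enrolled embeddings with a similarity threshold; all names, vectors, and the threshold are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_enrolled(probe, enrolled_db, threshold=0.9):
    """Return the matching user's name if the probe embedding is close
    enough to any enrolled embedding, else None (door stays locked)."""
    best_name, best_sim = None, threshold
    for name, reference in enrolled_db.items():
        sim = cosine_similarity(probe, reference)
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name

# Hypothetical 3-D embeddings standing in for real face descriptors.
db = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.5]}
print(is_enrolled([0.88, 0.12, 0.41], db))  # alice
print(is_enrolled([0.5, 0.5, 0.5], db))     # None
```

Returning None for unknown faces is the fail-closed behavior a doorlock needs: an unrecognized claimant never unlocks the door.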
Citations: 0
Keep Your Chin Up When You Want to Believe in Future Rewards: The Effect of Facial Direction on Discount Factors
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814973
Atsushi Hirota, Shogo Furuhashi, Takashi Uchida, Yusuke Suetsugu, Eri Takashima, Toshimasa Takai, Misa Yoshizaki, Hirofumi Hayakawa, Yukiko Nishizaki, N. Oka
Studies have shown that a person's current body state can affect their thinking (embodied cognition). We tested how value judgments can be changed by this. The results of our experiment demonstrated that participants tended to discount a future reward less when looking up than when looking down. Moreover, we found that the β parameter, which represents the value of immediate rewards relative to delayed rewards received at another point in time, significantly differed between the two conditions, whereas the δ parameter, the discount rate in the standard exponential formula, did not show a significant difference. Using functional magnetic resonance imaging, McClure et al. (2004) showed that β is mediated by the lower-level, automatic processes of the limbic structures, while δ is mediated by the lateral prefrontal cortex, supporting higher cognitive functions. Combining these two results, we conclude that the embodied cognition in our experiment was mainly produced by lower-level brain processes. We believe that knowing that the discount factor β can be controlled by posture can be applied when designing robot behavior, such as encouraging a diet, trying to sell insurance, or offering customers a card loan.
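The β and δ parameters here are those of the standard quasi-hyperbolic (β-δ) discounting model. A minimal sketch of that textbook formulation (not code from the paper) makes the roles of the two parameters concrete:

```python
def discounted_value(reward, delay, beta, delta):
    """Quasi-hyperbolic (beta-delta) discounting: an immediate reward
    keeps its full value; any delayed reward is scaled by
    beta * delta**delay. beta < 1 adds a uniform present bias on top
    of ordinary exponential discounting."""
    if delay == 0:
        return reward
    return beta * delta ** delay * reward

# With beta = 1 the model reduces to plain exponential discounting.
print(discounted_value(100, 0, beta=0.7, delta=0.95))            # 100
print(round(discounted_value(100, 1, beta=0.7, delta=0.95), 2))  # 66.5
print(round(discounted_value(100, 1, beta=1.0, delta=0.95), 2))  # 95.0
```

The paper's finding maps onto this model as posture shifting β (the one-off penalty applied to every delayed reward) while leaving δ (the per-period decay) unchanged.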
Citations: 0
Implicit Shopping Intention Recognition with Eye Tracking Data and Response Time
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815001
Dong-Gun Lee, Kyeongho Lee, Soo-Young Lee
Implicit intention is intention that is not expressed externally but held in one's mind. Implicit intention is difficult to recognize, but it can provide significant information if it can be recognized through suitable measures. When people buy something, they also hold an implicit intention about whether or not to buy. We propose an experimental paradigm to recognize a shopper's implicit intention, and the results of the experiment are analyzed in this paper. In the experiment, subjects were instructed to select items to buy from a set of candidates, and eye-tracking and speech data were recorded during the selection. In the data analysis, measures discriminating the existence of implicit shopping intention were selected and compared. From the results, fixation duration, fixation count, multiplication of first fixation duration, and visit count showed different tendencies between the two cases: when people intend to buy an item and when they do not. Using these criteria, people's implicit shopping intention can be recognized.
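The gaze measures the abstract lists can be aggregated from a fixation log. A minimal sketch, assuming a chronological list of (item, start_ms, end_ms) fixation records; the record format and field names are illustrative, not the paper's data schema:

```python
from collections import defaultdict

def gaze_measures(fixations):
    """Aggregate per-item gaze measures from (item, start_ms, end_ms)
    fixation records, assumed to be in chronological order."""
    measures = defaultdict(lambda: {"total_duration": 0, "fixation_count": 0,
                                    "first_fixation_duration": None,
                                    "visit_count": 0})
    prev_item = None
    for item, start, end in fixations:
        entry = measures[item]
        duration = end - start
        entry["total_duration"] += duration
        entry["fixation_count"] += 1
        if entry["first_fixation_duration"] is None:
            entry["first_fixation_duration"] = duration
        if item != prev_item:  # a new visit begins when gaze re-enters the item
            entry["visit_count"] += 1
        prev_item = item
    return dict(measures)

log = [("shoes", 0, 200), ("shoes", 200, 450), ("bag", 450, 600),
       ("shoes", 600, 900)]
m = gaze_measures(log)
print(m["shoes"]["total_duration"], m["shoes"]["fixation_count"],
      m["shoes"]["first_fixation_duration"], m["shoes"]["visit_count"])
# 750 3 200 2
```

Per-item vectors like these are what a classifier would then compare between the buy and no-buy conditions.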
Citations: 5
Social Appearance of Virtual Agent and Temporal Contingency Effect
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814961
Hanju Lee, Yasuhiro Kanakogi, K. Hiraki
In our previous study, we developed the Pedagogical Agent with Gaze Interaction (PAGI), an anthropomorphic animated pedagogical agent that engages in gaze interaction with students. Using PAGI, we revealed that temporal contingency from virtual agents facilitates learning (the temporal contingency effect), and proposed two hypotheses that may explain this result: 1) temporal contingency reduces extraneous cognitive load related to visual search; 2) temporal contingency primes a social stance in learners, which enhances learning. To examine this matter more deeply, we tested two critical features of the agent: saliency and socialness. Two arrow-shaped agents, which differed in saliency, were employed. Apart from the appearance of the agents, the experimental design was identical to the previous study. University students learned words of a foreign language with either a temporally contingent agent or a recorded version of the agent, which played back pre-recorded sessions from the contingent agents. From the results, we gained evidence supporting the second hypothesis: non-social agents did not trigger the temporal contingency effect.
Citations: 1
Time Delay Effect on Social Interaction Dynamics
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814979
H. Iizuka, S. Saitoh, D. Marocco, Masahito Yamamoto
This paper investigates time-delay effects on human social interaction to understand how humans can adapt to time delay, an ability that software agents will require to establish harmonious interaction with humans. We performed minimal social interaction experiments, known as perceptual crossing experiments, with time delay. Our results show that the social interaction breaks down when the total time delay exceeds about one second. However, the interaction breaks down more easily when the time delay is imposed on both participants than on either participant alone.
Citations: 1
Women and Men Collaborating with Robots on Assembly Lines: Designing a Novel Evaluation Scenario for Collocated Human-Robot Teamwork
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814948
S. Seo, Jihyang Gu, Seongmi Jeong, Keelin Griffin, J. Young, Andrea Bunt, S. Prentice
This paper presents an original scenario design specifically created for exploring gender-related issues surrounding collaborative human-robot teams on assembly lines. Our methodology is grounded squarely in the need for increased gender work in human-robot interaction. As with most research in social human-robot interaction, investigating and exploring gender issues relies heavily on an evaluation methodology and scenario that aims to maximize ecological validity, so that the lab results can generalize to a real-world social scenario. In this paper, we present our discussion on study elements required for ecological validity in our context, present an original study design that meets these criteria, and present initial pilot results that reflect on our approach and study design.
Citations: 6