
Proceedings of the 3rd International Conference on Human-Agent Interaction: Latest Publications

"Hi, It's Me Again!": Virtual Coaches over Mobile Video “嗨,又是我!”:移动视频上的虚拟教练
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814970
Sin-Hwa Kang, D. Krum, Thai-Binh Phan, M. Bolas
We believe that virtual humans presented over video chat services, such as Skype via smartphones, can be an effective way to deliver innovative applications where social interactions are important, such as counseling and coaching. We hypothesize that the context of a smartphone communication channel, i.e. how a virtual human is presented within a smartphone app, and indeed, the nature of that app, can profoundly affect how a real human perceives the virtual human. We have built an apparatus that allows virtual humans to initiate, receive, and interact over video calls using Skype or any similar service. With this platform, we are examining effective designs and social implications of virtual humans that interact over mobile video. The current study examines a relationship involving repeated counseling-style interactions with a virtual human, leveraging the virtual human's ability to call and interact with a real human on multiple occasions over a period of time. The results and implications of this preliminary study suggest that repeated interactions may improve perceived social characteristics of the virtual human.
Citations: 0
Shared Presence and Collaboration Using a Co-Located Humanoid Robot
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814995
Johann Wentzel, Daniel J. Rea, J. Young, E. Sharlin
This work proposes the concept of shared presence, where we enable a user to "become" a co-located humanoid robot while still being able to use their real body to complete tasks. The user controls the robot and sees with its vision and sensors, while still maintaining awareness and use of their real body for tasks other than controlling the robot. This shared presence can be used to accomplish tasks that are difficult for one person alone, for example, a robot manipulating a circuit board for easier soldering by the user, lifting and manipulating heavy or unwieldy objects together, or generally having the robot conduct and complete secondary tasks while the user focuses on the primary tasks. If people are able to overcome the cognitive difficulty of maintaining presence for both themselves and a nearby remote entity, tasks that typically require the use of two people could simply require one person assisted by a humanoid robot that they control. In this work, we explore some of the challenges of creating such a system, propose research questions for shared presence, and present our initial implementation that can enable shared presence. We believe shared presence opens up a new research direction that can be applied to many fields, including manufacturing, home-assistant robotics, and education.
Citations: 4
Food Image Recognition by Using Bag-of-SURF Features and HOG Features
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814968
Almarzooqi Ahmed, T. Ozeki
Because of food culture, religion, allergies, and food intolerances, we need a good system to help us recognize our food. In this paper, we propose methods to recognize food and to show its ingredients using a bag-of-features (BoF) representation based on SURF detection features. We also propose using a bag of SURF features and a bag of HOG features together with SURF feature detection to recognize food items. In our experiment, we achieved up to 72% accuracy on a small food image dataset of 10 categories. Our experiments show that the proposed representation identifies food with high accuracy compared with existing methods. Moreover, enhancing the visual dataset with more images should improve the accuracy rates, especially for classes with high diversity.
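To make the combined bag-of-SURF and HOG representation concrete, the sketch below outlines one possible pipeline. It is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV built with the contrib xfeatures2d module (SURF is non-free and absent from default builds), scikit-learn, grayscale uint8 food images, and placeholder choices for the vocabulary size and classifier.

```python
# Minimal bag-of-SURF + HOG sketch (illustrative only, not the paper's code):
# cluster SURF descriptors into a visual vocabulary, histogram each image
# against it, append a HOG descriptor, and train a linear SVM on the result.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # requires opencv-contrib with non-free enabled
hog = cv2.HOGDescriptor()                                  # default 64x128 detection window

def surf_descriptors(img):
    _, desc = surf.detectAndCompute(img, None)
    return desc if desc is not None else np.empty((0, 64), np.float32)

def hog_descriptor(img):
    return hog.compute(cv2.resize(img, (64, 128))).ravel()

def build_vocabulary(images, k=200):
    # k (vocabulary size) is a placeholder; the abstract does not state its value.
    all_desc = np.vstack([surf_descriptors(im) for im in images])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def encode(img, vocab):
    desc = surf_descriptors(img)
    hist = np.zeros(vocab.n_clusters, np.float32)
    if len(desc):
        words, counts = np.unique(vocab.predict(desc), return_counts=True)
        hist[words] = counts / counts.sum()              # normalized bag-of-SURF histogram
    return np.concatenate([hist, hog_descriptor(img)])   # SURF histogram + HOG features

# Hypothetical usage with grayscale images and integer category labels:
# vocab = build_vocabulary(train_images)
# clf = LinearSVC().fit([encode(im, vocab) for im in train_images], train_labels)
# predictions = clf.predict([encode(im, vocab) for im in test_images])
```

Concatenating the normalized visual-word histogram with the HOG descriptor is one straightforward way to combine the two feature types before classification; the paper may fuse them differently.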
Citations: 10
Effects of Behavioral Complexity on Intention Attribution to Robots
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814949
Yuto Imamura, K. Terada, Hideyuki Takahashi
Researchers in artificial intelligence and robotics have long debated whether robots are capable of possessing minds. We hypothesize that the mind is an abstract internal representation of an agent's input-output relationships, acquired through evolution to interact with others in a non-zero-sum game environment. Attributing mental states to others, based on their complex behaviors, enables an agent to understand another agent's current behavior and predict its future behavior. Therefore, behavioral complexity, i.e., complex sensory input and motor output, might be an essential cue in attributing abstract mental states to others. To test this theory, we conducted experiments in which participants were asked to control a robot that exhibits either simple or complex input-output relationships in its behavior to achieve goals by pushing a button switch on a remote control device. We then measured participants' subjective impressions of the robot after a sudden change in the mapping between the button switch and motor output during the goal-oriented task. The results indicate that the complex relationship between inputs and a robot's behavioral output requires greater abstraction and induces humans to attribute mental states to the robot in contrast to a simple relationship scenario.
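As a concrete reading of the "simple" versus "complex" input-output relationships manipulated in the experiment, the sketch below contrasts a fixed button-to-motor mapping with one mediated by internal state. The specific mappings are hypothetical and only illustrate the kind of manipulation the abstract describes.

```python
# Hypothetical contrast between a simple and a complex input-output relationship
# (illustrative only; the paper's actual controller is not specified in the abstract).
import random

def simple_policy(button_pressed):
    # Simple relationship: identical input always produces identical motor output.
    return "move_forward" if button_pressed else "stop"

class ComplexPolicy:
    # Complex relationship: the motor output depends on an internal state that
    # evolves with the input history, so identical presses can yield different actions.
    def __init__(self):
        self.state = 0

    def act(self, button_pressed):
        if button_pressed:
            self.state = (self.state + random.choice([1, 2])) % 4
        return ["stop", "move_forward", "turn_left", "turn_right"][self.state]

# A sudden remapping, as in the study's probe, would correspond to swapping the
# policy (or permuting its action table) mid-task while the button stays the same.
```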
Citations: 6
The Effect of An Animated Virtual Character on Mobile Chat Interactions
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814957
Sin-Hwa Kang, Andrew W. Feng, A. Leuski, D. Casas, Ari Shapiro
This study explores presentation techniques for a 3D animated chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside of the lab in natural settings. Our work compares the responses of users who interact with no image or a static image of a virtual character as opposed to the animated visage of a virtual human capable of displaying appropriate nonverbal behavior. We further investigate users' responses to the animated character's gaze aversion which displayed the character's act of looking away from users and was presented as a listening behavior. The findings of our study demonstrate that people tend to engage in conversation more by talking for a longer amount of time when they interact with a 3D animated virtual human that averts its gaze, compared to an animated virtual human that does not avert its gaze, a static image of a virtual character, or an audio-only interface.
Citations: 15
Transitional Explainer: Instruct Functions in the Real World and Onscreen in Multi-Function Printer
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814945
Hirotaka Osawa, Wataru Kayano, T. Miura, Wataru Endo
An office appliance, such as a multi-function printer (MFP) that combines a printer, copy machine, scanner, and facsimile, requires users to simultaneously learn both the manipulation of real-world objects and their abstract representation in a virtual world. Although many MFPs are provided in most offices and stores, their services are often not fully utilized because users deem their features too difficult to understand. We therefore propose a 'transitional explainer,' an agent that instructs users about the features of MFPs by mixing real- and virtual-world representations. Blended reality has been proposed as part of augmented reality. It involves the blending of virtual and real expressions to leverage their combined advantages. In this study, we utilize the advantages of blended reality to show users how to operate complex appliances by extending anthropomorphized explanations. A self-explanatory style is used by the appliance itself to improve the user's ability to remember features and enhance the motivation of all users, especially older ones, to learn the functions. With this system, users interact with the transitional agent and thereby learn how to use the MFP. The agent hides its real eyes and arms in onscreen mode, and extends them in real-world mode. We implemented the transitional explainer for realizing the blended reality agent in the MFP. In addition, we evaluated how this transitional expression supports users' understanding of how to manipulate the MFP and enhances users' motivation to use it.
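The onscreen/real-world transition described above can be pictured with the following sketch. It is a hypothetical simplification: the step descriptions and the rule for choosing the embodiment are invented for illustration and are not taken from the actual system.

```python
# Hypothetical sketch of the transitional explainer's two embodiments
# (not the authors' implementation): steps involving physical manipulation
# trigger the real-world mode, while menu-level steps stay onscreen.
from enum import Enum

class Mode(Enum):
    ONSCREEN = "onscreen"       # agent rendered on the MFP panel, real eyes/arms hidden
    REAL_WORLD = "real_world"   # physical eyes/arms extended toward the device

class TransitionalExplainer:
    def __init__(self):
        self.mode = Mode.ONSCREEN

    def explain(self, step):
        target = Mode.REAL_WORLD if step["physical"] else Mode.ONSCREEN
        if target is not self.mode:
            self.mode = target  # transition between virtual and physical embodiment
        return f"[{self.mode.value}] {step['instruction']}"

# Hypothetical usage:
# agent = TransitionalExplainer()
# agent.explain({"instruction": "Select 'Scan to PDF' from the menu", "physical": False})
# agent.explain({"instruction": "Place the document face down on the glass", "physical": True})
```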
Citations: 1
DECoReS: Degree Expressional Command Reproducing System for Autonomous Wheelchairs
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814942
Komei Hasegawa, Seigo Furuya, Yusuke Kanai, M. Imai
In this paper, we propose DECoReS (Degree Expressional Command Reproducing System), which allows a powered wheelchair to travel autonomously through commands that include "degree expressions", adapted to particular users and environments. When users control a wheelchair through voice commands, they sometimes give orders such as "go straight speedily" or "curve to the right widely" to qualify the traveling commands. As these examples illustrate, optional words called "degree expressions" are appended to the commands. Because degree expressions are ambiguous, the traveling styles they describe vary with the user and the environment. DECoReS realizes travels suited to each user by learning degree expressional commands and traveling data from that user. It also reproduces travels suited to the environment the user is about to drive in by extracting data recorded on a map similar to the current environment. Our experiments show that DECoReS can reproduce different travels depending on the degree expressional commands, users, and environments.
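To illustrate how a command with a degree expression might be decomposed and mapped to user-specific travel parameters, here is a minimal sketch. The vocabulary, scaling factors, and per-user profile are invented for illustration; the actual DECoReS system learns these from commands and traveling data rather than using fixed tables.

```python
# Hypothetical sketch of degree-expressional command handling (not the DECoReS code):
# split an utterance into a base command plus an optional degree expression, then
# scale per-user travel parameters that a real system would learn from past travels.
BASE_COMMANDS = {"go straight": "straight", "curve to the right": "curve_right"}
DEGREE_WORDS = {
    "speedily": {"speed_scale": 1.5},
    "slowly": {"speed_scale": 0.6},
    "widely": {"curvature_scale": 0.5},
    "sharply": {"curvature_scale": 1.8},
}

def parse_command(utterance):
    utterance = utterance.lower().strip()
    for phrase, action in BASE_COMMANDS.items():
        if utterance.startswith(phrase):
            degree_word = utterance[len(phrase):].strip()
            return action, DEGREE_WORDS.get(degree_word, {})  # empty dict: no degree expression
    return None, {}  # unrecognized command

def travel_parameters(action, degree, user_profile):
    # user_profile stands in for the per-user, per-environment data DECoReS learns.
    params = dict(user_profile[action])
    params["speed"] *= degree.get("speed_scale", 1.0)
    params["curvature"] *= degree.get("curvature_scale", 1.0)
    return params

# Hypothetical per-user defaults:
profile = {"straight": {"speed": 0.8, "curvature": 0.0},
           "curve_right": {"speed": 0.5, "curvature": 0.4}}
print(travel_parameters(*parse_command("curve to the right widely"), profile))
# -> {'speed': 0.5, 'curvature': 0.2}: the "widely" degree expression halves the curvature.
```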
Citations: 5
Proceedings of the 3rd International Conference on Human-Agent Interaction
Minho Lee, T. Omori, Hirotaka Osawa, Hyeyoung Park, J. Young
It is our great pleasure to welcome you to The Third International Conference on Human-Agent Interaction (HAI 2015) in Korea. HAI has grown beyond our expectations in the last two years, and this year's HAI continues its tradition of being the relevant forum for presenting research results and experience reports on leading-edge issues of Human-Agent Interaction. HAI gathers researchers from fields spanning engineering, computer science, psychology, sociology, and cognitive science while covering diverse topics including human-virtual/physical agent interaction, communication and interaction with smart homes/smart cars, and modeling of those interactions. The mission of the conference is to share novel quantitative and qualitative research on human and artificial agent interaction and to identify new directions for future research and development. HAI gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of human-agent interaction. This year HAI has three exciting keynote talks by world leaders in areas related to human-agent interaction, and we encourage participants to attend them. These valuable and insightful talks can and will guide us to a better understanding of various research issues and methods in the field: "Designing the Robotic User Experience: Behavior and Appearance" by Guy Hoffman (Assistant Professor, IDC Herzliya); "Understanding Human Internal States: I Know What You Are and What You Think" by Soo-Young Lee (Professor, KAIST); and "The Evolutionary Origins of Human Cognitive Development: Insights from Research on Chimpanzees" by Tetsuro Matsuzawa (Professor, Kyoto University). We also have 23 oral presentations, 51 poster presentations, and 2 workshops, all presenting the latest research results and ideas. Discussion between researchers from all over the world will be exciting.
Citations: 1
Model of Agency Identification through Subconscious Embodied Interaction
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2814950
Takafumi Sakamoto, Yugo Takeuchi
Humans can communicate because they adapt and adjust their behavior to each other. Developing a relationship with an unknown artifact, on the other hand, is difficult. To address this problem, some robots utilize the context of the interaction between humans. However, there has been little investigation on interaction when no information about the interaction partner has been provided and where there has been no experimental task. Clarification of how people perceive unknown objects as agents is required. We believe that a stage of subconscious interaction plays a role in this process. We created an experimental environment to observe the interaction between a human and a robot whose behavior was actually mapped by another human. The participants were required to verbalize what they were thinking or feeling while interacting with the robot. The results of our experiment suggest that the timing of movement was used as the cue for interaction development. We need to verify the effects of other interaction patterns and inspect what kind of action and reaction are regarded as signals that enhance interpersonal interaction.
Citations: 3
An Anthropomorphic Approach to Presenting Information on Demand Response Reflecting Household's Environmental Moral
Pub Date : 2015-10-21 DOI: 10.1145/2814940.2815012
T. Nakayama, Hirotaka Osawa, S. Okushima
A demand response method is expected to reduce carbon dioxide emissions and to stabilize the power supply. Demand control often depends on the electric power supplier's intentions, so users are forced to accept inconvenience in their way of life. We therefore need an innovative method that encourages users to participate in demand response and power-saving actions without demanding their forbearance. Usually, power-saving behaviors are motivated by pecuniary incentives or by environmental concerns such as reducing carbon dioxide emissions and the probability of power failure. Against this background, this paper evaluates the effectiveness of encouraging people's engagement in power-saving actions by presenting numerical, visualized, and anthropomorphized information about pecuniary incentives and environmental concerns. The results show that presenting anthropomorphized information induced users' power-saving behaviors more strongly than the other methods.
Citations: 1