
Latest publications from the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

Healthcare robot systems for a hospital environment: CareBot and ReceptionBot
H. Ahn, Min Ho Lee, B. MacDonald
This paper presents a robot system for healthcare facility environments. Current healthcare robot systems do not address healthcare workflows well; our goal is to provide distributed, heterogeneous multi-robot systems that can integrate with healthcare workflows and are easy to modify when workflow requirements change. The proposed system consists of three subsystems: a receptionist robot system, a nurse assistant robot system, and a medical server. The receptionist robot and the nurse assistant robot perform tasks that support the human receptionist and nurse. The healthcare robots upload and download patient information through the medical server and provide data summaries to human caregivers via a web interface. We developed these healthcare robot systems on our new robotic software framework, which is designed to integrate different programming frameworks easily and to minimize the impact of framework differences and new versions. We test the functionalities of each healthcare robot system, evaluate robot-robot collaboration, and present a case study.
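The data flow the abstract describes (robots writing to a shared medical server, caregivers reading summaries) can be sketched as follows. This is a minimal illustration, not the authors' system: the class and method names (`MedicalServer`, `upload`, `summary`) are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the medical-server role described in the abstract:
# robots push patient records to a shared store, caregivers pull summaries.
@dataclass
class MedicalServer:
    records: dict = field(default_factory=dict)  # patient_id -> list of entries

    def upload(self, patient_id: str, entry: dict) -> None:
        """Called by a robot (e.g. CareBot or ReceptionBot) to store an observation."""
        self.records.setdefault(patient_id, []).append(entry)

    def download(self, patient_id: str) -> list:
        """Called by a robot to retrieve a patient's history."""
        return list(self.records.get(patient_id, []))

    def summary(self, patient_id: str) -> dict:
        """The kind of digest a caregiver-facing web interface might render."""
        entries = self.records.get(patient_id, [])
        return {"patient": patient_id, "n_entries": len(entries),
                "sources": sorted({e["robot"] for e in entries})}

server = MedicalServer()
server.upload("p001", {"robot": "ReceptionBot", "event": "check-in"})
server.upload("p001", {"robot": "CareBot", "event": "vitals", "hr": 72})
print(server.summary("p001"))
```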
DOI: 10.1109/ROMAN.2015.7333621 (https://doi.org/10.1109/ROMAN.2015.7333621) · Published: 2015-11-23
Citations: 31
Driving situation-based real-time interaction with intelligent driving assistance agent
Young-Hoon Nho, Ju-Hwan Seo, Jeong-Yean Yang, D. Kwon
Driving assistance systems (DASs) can be useful to inexperienced drivers. Current DASs comprise front and rear monitoring systems (FRMSs), lane departure warning systems (LDWSs), side obstacle warning systems (SOWSs), etc. DASs sometimes provide unnecessary information when they rely on unprocessed low-level data, so they need to be improved to give the driver only the necessary high-level information. In this paper, we present an intelligent driving assistance robotic agent for safe driving. We recognize seven driving situations, namely speed bump, corner, crowded area, uphill, downhill, straight, and parking space, using hidden Markov models (HMMs) based on velocity, accelerator pedal, and steering wheel signals. The seven situations and global positioning system information are used to generate a situation information map. The developers of a navigation system have to tag driving events by themselves; in contrast, our driving assistance agent tags situation information automatically as the vehicle is driven. The robotic agent uses the driving situation and status information to assist safe driving with motions and facial and verbal expressions.
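The recognition step described above (one HMM per driving situation, classify by maximum likelihood) can be sketched with a plain forward algorithm. This is an illustrative toy, not the authors' models: it uses two discrete HMMs instead of seven, and a single quantised observation symbol (0 = slow, 1 = fast) instead of real velocity, pedal, and steering signals.

```python
import math

def log_forward(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM
    (pi: initial distribution, A: transition matrix, B: emission matrix)."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return math.log(sum(alpha))

# Two toy situation models; the paper distinguishes seven situations
# (speed bump, corner, crowded area, uphill, downhill, straight, parking).
models = {
    "speed_bump": ([0.9, 0.1], [[0.8, 0.2], [0.3, 0.7]], [[0.9, 0.1], [0.4, 0.6]]),
    "straight":   ([0.1, 0.9], [[0.7, 0.3], [0.2, 0.8]], [[0.1, 0.9], [0.5, 0.5]]),
}

def classify(obs):
    """Pick the situation whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: log_forward(obs, *models[name]))

print(classify([0, 0, 0, 1, 0]))  # mostly slow -> "speed_bump"
print(classify([1, 1, 1, 1, 1]))  # sustained speed -> "straight"
```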
DOI: 10.1109/ROMAN.2015.7333592 (https://doi.org/10.1109/ROMAN.2015.7333592) · Published: 2015-11-23
Citations: 3
Paired robotic devices to mediate and represent social behaviors
Eleuda Nuñez, Soichiro Matsuda, Masakazu Hirokawa, Jun-ichi Yamamoto, Kenji Suzuki
Among treatments for children with ASD, assistive robots are growing in popularity as they are able to elicit different social behaviors. At the same time, technology provides methods to automatically collect quantitative data in order to assist the therapist in evaluating the children's progress. In this study we introduce a system composed of multiple spherical devices, as well as the design of a turn-taking activity using those devices; in previous studies, turn-taking was found to be an important social skill for development and for engaging in activities with others. To evaluate the system's performance, a single-case experiment was conducted with a boy with ASD and the developed device. During the activity, the device had two different roles: to engage the child in the turn-taking activity and to provide the therapist with information describing the child's behavior. We successfully collected quantitative data representing turn-taking and observed how the boy manipulated and interacted with the device. Based on the results, we are motivated to keep exploring the potential application of device-mediated activities for children with ASD.
DOI: 10.1109/ROMAN.2015.7333669 (https://doi.org/10.1109/ROMAN.2015.7333669) · Published: 2015-11-23
Citations: 3
Comparing two gesture design methods for a humanoid robot: Human motion mapping by an RGB-D sensor and hand-puppeteering
Minhua Zheng, Jiaole Wang, M. Meng
In this paper, two gesture design methods for the humanoid robot NAO are proposed and compared. The first method maps human motions to the robot using an RGB-D sensor and kinematic modeling; the second is based on hand-puppeteering. Thirteen subjects are recruited to design a forearm waving gesture for a NAO robot with each of the two methods, and the two resulting groups of gestures are then compared by another sixteen subjects. Our experimental results indicate that the forearm waving gestures obtained from the hand-puppeteering method are slower and have a smaller range of motion than those obtained from the motion mapping method. In addition, people tend to perceive the hand-puppeteered gestures as more likeable and as conveying the greeting message better than the motion-mapped ones. This work contributes to a better understanding of the nature of the two gesture design methods and offers an instructive reference for robot behavior designers when choosing a design method.
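A core step of the motion-mapping method is recovering joint angles from 3-D skeleton points supplied by an RGB-D sensor. The sketch below (an assumption, not the authors' kinematic model of NAO's arm) computes the angle at the elbow from shoulder, elbow, and wrist positions, which could then be sent to the corresponding robot joint.

```python
import math

def angle_between(a, b, c):
    """Angle at point b (radians) formed by the segments b->a and b->c,
    computed from the dot product of the two direction vectors."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

# Example skeleton points (metres) with the forearm bent upward at the elbow.
shoulder, elbow, wrist = (0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.3, 0.25, 0.0)
print(round(math.degrees(angle_between(shoulder, elbow, wrist)), 1))  # 90.0
```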
DOI: 10.1109/ROMAN.2015.7333639 (https://doi.org/10.1109/ROMAN.2015.7333639) · Published: 2015-11-23
Citations: 2
A content validated tool to observe autism behavior in child-robot interaction
S. Shamsuddin, H. Yussof, F. A. Hanapiah, Salina Mohamed
This research presents the validation study of a qualitative tool for analyzing responses in robot-based intervention. The 24 behavioral items in the tool were determined through routine observations carried out by clinicians and the definitions of autism adopted by the Diagnostic and Statistical Manual of Mental Disorders: Fourth Edition, Text Revision (DSM-IV-TR). 34 experts assessed content validity and tool reliability by rating the items on a Likert scale. The tool was found to have good content validity, with more than 67% of experts scoring at least 3 on the 5-point Likert scale, and a Cronbach's alpha coefficient of 0.872 reflected the tool's reliability and internal consistency. The tool was used to analyze the behavioral responses of children with autism when exposed to a humanoid robot, functioning as a score sheet to compare the behavior of autistic children with and without the presence of a robot. These findings put forward a tool whose contents are considered valid for evaluating behavioral outcomes in studies involving children with autism and robots.
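The two statistics reported above can be reproduced on toy data. The expert ratings below are made up (the paper's raw data is not available); the sketch shows the share of experts scoring an item at least 3 on a 5-point Likert scale, and Cronbach's alpha for internal consistency.

```python
def cronbach_alpha(ratings):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of totals).
    ratings: list of respondents, each a list of k item scores."""
    k = len(ratings[0])
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([r[i] for r in ratings]) for i in range(k)]
    total_var = var([sum(r) for r in ratings])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical panel: 4 experts rating 3 items on a 5-point Likert scale.
experts = [[4, 5, 4], [3, 4, 4], [5, 5, 5], [2, 3, 3]]

# Content-validity check for item 0: fraction of experts scoring it >= 3.
agree = sum(1 for e in experts if e[0] >= 3) / len(experts)
print(agree, round(cronbach_alpha(experts), 3))  # 0.75 0.953
```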
DOI: 10.1109/ROMAN.2015.7333578 (https://doi.org/10.1109/ROMAN.2015.7333578) · Published: 2015-11-23
Citations: 2
Model of strategic behavior for interaction that guide others internal state
T. Omori, T. Shimotomai, Kasumi Abe, T. Nagai
Though communication is one of our basic activities, we cannot always interact effectively. It is well known that a key to successful interaction is engaging the other in a good mood; acquiring the other's interest is thus a precondition for successful communication.
DOI: 10.1109/ROMAN.2015.7333593 (https://doi.org/10.1109/ROMAN.2015.7333593) · Published: 2015-11-23
Citations: 2
Crossmodal combination among verbal, facial, and flexion expression for anthropomorphic acceptability
Tomoko Yonezawa, Naoto Yoshida, Jumpei Nishinaka
This paper proposes effective communication of both the apparent and the actual level at which an agent accepts the user's order, using verbal and nonverbal expressions. The crossmodal combination is expected to enable delicate expression of the agent's internal state, especially when the agent's stated acceptance differs from its actual inclination, so that the user becomes aware of difficult situations. We propose to adopt the facial expression and the flexion of the word ending as parameters of the agent's internal state; these expressions are attached to the scripts of each acceptance level. The results showed the following: 1) the effectiveness of both the facial and flexion expressions, and 2) the crossmodal combinations that convey the agent's concealed but real feeling.
DOI: 10.1109/ROMAN.2015.7333623 (https://doi.org/10.1109/ROMAN.2015.7333623) · Published: 2015-11-23
Citations: 2
Goal recognition using temporal emphasis
Konstantinos Theofilis, Chrystopher L. Nehaniv, K. Dautenhahn
The question of what to imitate is pivotal for imitation learning in robotics. When the robot's tutor is a naive user, it is very difficult for the embodied agent to account for the unpredictability of the tutor's behaviour. Preliminary results from a previous study suggested that the phenomenon of temporal emphasis, i.e., that tutors tend to keep the goal state of the demonstrated task stationary longer than the sub-states, can be used to recognise that task. In the present paper, the previous study is expanded and the existence of the phenomenon is investigated further. An improved experimental setup, using the iCub humanoid robot and naive users, was implemented. Analysis of the data showed that the phenomenon was detected in the majority of the cases, with a strongly significant result; in the few cases where the end state was not the one with the longest time span, it was a close second. A very simple algorithm using a single binary criterion was then used to show that the phenomenon exists and can be detected easily. This leads to the argument that humans may also be able to detect this phenomenon and use it to recognize the end goal as learners, or to emphasize it when teaching as tutors, at least for tasks with clear and separate sub-goal sequences. A robot that implements this behavior could perform better both as a tutor and as a learner when interacting with naive users.
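The single-criterion idea described above can be sketched in a few lines. This is an illustrative guess at the mechanism, not the authors' code: given the sequence of states a tutor demonstrates, the goal is taken to be the state held stationary for the longest continuous run.

```python
from itertools import groupby

def goal_by_temporal_emphasis(states):
    """Guess the goal state of a demonstration as the state with the longest
    continuous run. states: one symbol per time step.
    Returns (state, run_length)."""
    runs = [(s, len(list(g))) for s, g in groupby(states)]
    return max(runs, key=lambda r: r[1])

# The tutor passes through sub-states A and B but lingers on C, the goal.
demo = list("AABBBCCCCCCA")
print(goal_by_temporal_emphasis(demo))  # ('C', 6)
```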
DOI: 10.1109/ROMAN.2015.7333650 (https://doi.org/10.1109/ROMAN.2015.7333650) · Published: 2015-11-23
Citations: 1
An architecture for emotional and context-aware associative learning for robot companions
Caroline Rizzi Raymundo, Colin G. Johnson, P. A. Vargas
This work proposes a theoretical architectural model based on the brain's fear learning system, with the purpose of generating artificial fear conditioning at both the stimulus and context abstraction levels in robot companions. The proposed architecture is inspired by the different brain regions involved in fear learning, here divided into four modules that work in an integrated and parallel manner: the sensory system, the amygdala system, the hippocampal system, and the working memory. Each of these modules is based on a different approach and performs a different task in the process of learning and memorizing environmental cues to predict the occurrence of unpleasant situations. The main contribution of the proposed model is the integration of fear learning and context awareness in order to fuse emotional and contextual artificial memories. The purpose is to provide robots with more believable social responses, leading to more natural interactions between humans and robots.
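The division of labour the abstract describes can be sketched as a toy agent. Everything here is an assumption for illustration: the class and method names are invented, and the learning rule is a generic Rescorla-Wagner-style update rather than the authors' mechanism. The point is only how a stimulus association (amygdala-like) can be conditioned on context (hippocampus-like).

```python
class FearLearningAgent:
    """Toy fear-conditioning agent: learns (context, stimulus) -> fear strength."""

    def __init__(self, learning_rate=0.5):
        self.lr = learning_rate
        self.assoc = {}  # (context, stimulus) -> conditioned fear in [0, 1]

    def sense(self, raw):
        """Sensory module: reduce raw input to (context, stimulus) features."""
        return raw["context"], raw["stimulus"]

    def condition(self, raw, aversive):
        """Pair a cue with an (non-)aversive outcome; error-driven update."""
        key = self.sense(raw)
        old = self.assoc.get(key, 0.0)
        target = 1.0 if aversive else 0.0
        self.assoc[key] = old + self.lr * (target - old)

    def fear(self, raw):
        """Working-memory query: predicted fear for the current cue."""
        return self.assoc.get(self.sense(raw), 0.0)

agent = FearLearningAgent()
cue = {"context": "kitchen", "stimulus": "loud_noise"}
for _ in range(3):
    agent.condition(cue, aversive=True)
print(round(agent.fear(cue), 3))                                    # 0.875
print(agent.fear({"context": "garden", "stimulus": "loud_noise"}))  # 0.0
```

Because the association key includes the context, the same stimulus in a new context triggers no fear until it is conditioned there, which is the context-awareness the architecture argues for.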
DOI: 10.1109/ROMAN.2015.7333699 (https://doi.org/10.1109/ROMAN.2015.7333699) · Published: 2015-11-23
Citations: 9
Towards enhancing human experience by affective robots: Experiment and discussion
Takahiro Matsumoto, Shunichi Seko, Ryosuke Aoki, Akihiro Miyata, Tomoki Watanabe, Tomohiro Yamada
Many studies in the field of human-robot interaction have addressed the affective robot, a robot that can express emotion. Truly useful applications, however, can only be designed once the effect of such expressions on the user is completely elucidated. In this paper, we propose a new application scenario for an affective robot that shares the user's experience, and describe an experiment in which the user's experience is altered by the presence of the affective robot. As the stimulus, we use movie scenes to evoke four types of emotion: excitement, fright, depression, and relaxation. Twenty-four participants watched different movies under three conditions: no robot present, with a robot offering appropriate emotional expression, and with a robot showing random emotional expression. The results show that participants watching with the appropriate-emotion robot experienced stronger emotion with exciting and relaxing movies, and weaker emotion with scary movies, than without the robot; these changes in the viewer's experience did not occur with the random-emotion robot. From the results, we extract design points of affective robot behavior for enhancing user experience. This research is novel in examining the impact of robot emotion, seen as appropriate by the viewer, on the viewer's experience.
{"title":"Towards enhancing human experience by affective robots: Experiment and discussion","authors":"Takahiro Matsumoto, Shunichi Seko, Ryosuke Aoki, Akihiro Miyata, Tomoki Watanabe, Tomohiro Yamada","doi":"10.1109/ROMAN.2015.7333591","DOIUrl":"https://doi.org/10.1109/ROMAN.2015.7333591","url":null,"abstract":"Many studies have addressed the affective robot, a robot that can express emotion, in the field of human-robot interaction. Really useful applications, however, can only be designed if the effect of such expressions on the user are completely elucidated. In this paper, we propose a new useful application scenario for the affective robot that shares the user's experience and describe an experiment in which the user's experience is altered by the presence of the affective robot. As the stimulus, we use movie scenes to evoke 4 types of emotion: excitement, fright, depression, and relaxation. Twenty four participants watch different movies under three conditions: no robot present, with robot that offers appropriate emotional expression, and with robot that has random emotional expression. The results show that the participants watching with the appropriate emotion robot experienced stronger emotion with exciting and relaxing movies and weaker emotion with scary movies than is true without the robot. These changes in the viewer's experience did not occur when watching with the random emotion robot. From the results, we extract design points of affective robot behavior for enhancing user experience. 
This research is novel in terms of examining the impact of robot emotion, seen as appropriate by the viewer, on the viewer's experience.","PeriodicalId":119467,"journal":{"name":"2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129077333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
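The experimental design in the abstract above (three robot conditions crossed with four target movie emotions) can be sketched as a small enumeration. Condition names, the helper function, and the seeded RNG are assumptions for illustration; the paper does not publish code.

```python
import itertools
import random

# Illustrative sketch of the design in the abstract above:
# 3 robot conditions x 4 target movie emotions.
CONDITIONS = ("no_robot", "appropriate_emotion", "random_emotion")
EMOTIONS = ("excitement", "fright", "depression", "relaxation")


def robot_expression(condition, movie_emotion, rng=None):
    """Emotion the robot displays while the movie plays, per condition."""
    if condition == "no_robot":
        return None                      # no robot present at all
    if condition == "appropriate_emotion":
        return movie_emotion             # robot mirrors the movie's target emotion
    return (rng or random).choice(EMOTIONS)  # random_emotion condition


# Every condition x emotion cell of the design (12 cells in total).
design = list(itertools.product(CONDITIONS, EMOTIONS))
```

The reported result is then a comparison across the first axis of this design: the `appropriate_emotion` condition amplified excitement and relaxation and dampened fright relative to `no_robot`, while `random_emotion` produced no such shift.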
Journal
2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)