Latest publications from the 2015 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN)

The usefulness of P-CUBE as a programming education tool for programming beginners
T. Motoyoshi, Shun Kakehashi, H. Masuta, K. Koyanagi, T. Oshima, H. Kawakami
P-CUBE is a block-type programming tool for beginners, including the visually impaired. This paper describes a study on the usefulness of P-CUBE as an education tool for programming beginners. We conducted an experiment comparing P-CUBE with conventional code-based programming software for a mobile robot. During the experiment we recorded the number of times subjects consulted the programming manual and the time required to complete the programming exercise. At the end of the experiment we collected subjective assessments from the subjects. The results showed that P-CUBE is useful as a programming education tool for beginners.
DOI: 10.1109/ROMAN.2015.7333642
Citations: 3
How important is body language in mood induction procedures with a humanoid robot?
Cristina Diaz-Montilla, A. P. D. Pobil
The aim of this article is to investigate the effectiveness of a humanoid robot's body language in inducing emotions. It is based on the principles of positive psychology (PP), social learning, therapeutic robotics, and mood induction procedures (MIPs). Following the Velten Method for inducing a positive mood, the body language of a humanoid robot is used here as a modulated variable to test its efficacy in inducing emotions. We have three hypotheses: (H1) positive body language reinforces the positive attitude of the Velten positive statements; (H2) body language expressing the opposite attitude to that of the Velten statements, i.e. a negative attitude, can negatively affect the mood induction results; (H3) the more positive the body language, the stronger the positive induction effect. We ran experiments with 48 volunteers to test these hypotheses. Results show that hypotheses (H1) and (H2) hold, but (H3) does not: it is not confirmed with an exaggerated expression of elated mood. Furthermore, our new combined MIP has a significant effect size for inducing positive emotions.
DOI: 10.1109/ROMAN.2015.7333697
Citations: 1
Usability evaluation with different viewpoints of a Human-Swarm interface for UAVs control in formation
C. Recchiuto, A. Sgorbissa, R. Zaccaria
A common way to organize a large number of robots, whether moving autonomously or controlled by a human operator, is to let them move in formation. This principle takes inspiration from nature: it maximizes the possibility of monitoring the environment and therefore of anticipating risks and finding targets. In robotics, beyond these reasons, organizing a robot team in formation allows a human operator to deal with a large number of agents in a simpler way, moving the swarm as a single entity. In this context, the type of visual feedback is fundamental for correct situational awareness, but in practice an optimal camera configuration is not always possible. Human operators usually use cameras on board the multirotors, with an egocentric point of view, while it is known that in mobile robotics overall awareness and pattern recognition are optimized by exocentric views. In this article we analyze the performance achieved by human operators controlling a swarm of UAVs in formation, accomplishing different tasks and using different points of view. The control architecture is implemented in a ROS framework and interfaced with a 3D simulation environment. Experimental tests show a degradation of performance when using egocentric cameras relative to an exocentric point of view, although the on-board cameras allow simple tasks to be accomplished satisfactorily.
DOI: 10.1109/ROMAN.2015.7333638
Citations: 5
When will people regard robots as morally competent social partners?
B. Malle, Matthias Scheutz
We propose that moral competence consists of five distinct but related elements: (1) having a system of norms; (2) mastering a moral vocabulary; (3) exhibiting moral cognition and affect; (4) exhibiting moral decision making and action; and (5) engaging in moral communication. We identify some of the likely triggers that may convince people to (justifiably) ascribe each of these elements of moral competence to robots. We suggest that humans will treat robots as moral agents (who have some rights and obligations and are targets of blame) if they perceive them to have at least elements (1) and (2) and one or more of elements (3)-(5).
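The ascription rule in the abstract's last sentence is essentially a logical predicate. As a minimal illustrative sketch (the element names and function below are ours, not the authors'):

```python
# Hypothetical encoding of the proposed ascription rule: a robot is treated
# as a moral agent if it is perceived to have elements (1) a system of norms
# and (2) a moral vocabulary, plus at least one of (3) moral cognition/affect,
# (4) moral decision making/action, or (5) moral communication.

REQUIRED = {"norms", "vocabulary"}                     # elements (1) and (2)
OPTIONAL = {"cognition", "decision", "communication"}  # elements (3)-(5)

def ascribed_moral_agency(perceived: set) -> bool:
    """Return True if the perceived elements satisfy the proposed rule."""
    return REQUIRED <= perceived and bool(OPTIONAL & perceived)
```

For example, a robot perceived to have norms, a vocabulary, and moral communication would qualify, while one with norms alone would not.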
DOI: 10.1109/ROMAN.2015.7333667
Citations: 15
Design and implementation of multi-dimensional flexible antena-like hair motivated by ‘Aho-Hair’ in Japanese anime cartoons: Internal state expressions beyond design limitations
Kazuhiro Sasabuchi, Youhei Kakiuchi, K. Okada, M. Inaba
Recent research in psychology argues for the importance of “context” in emotion perception. According to these studies, facial expressions do not possess discrete emotional meanings; rather, the meaning depends on the social situation of how and when the expressions are used. These results imply that emotion expressivity depends on the appropriate combination of context and expression, not on the distinctiveness of the expressions themselves. It is therefore inferable that relying on facial expressions may not be essential; instead, when appropriate pairs of context and expression are applied, emotional internal states may emerge. This paper first discusses how robot facial expressions constrain head design and can be costly in hardware. It then proposes a way of expressing context-based emotions as an alternative to facial expressions, and introduces a mechanical structure for producing a specific non-facial contextual expression. The expression originated in Japanese animation, and the mechanism was applied to a real desktop-size humanoid robot. Finally, an experiment on whether the contextual expression can link humanoid motions to emotional internal states was conducted under a sound-context condition. Although the results are limited in their cultural scope, this paper presents possibilities for future robotic interfaces for emotion-expressive and interactive humanoid robots.
DOI: 10.1109/ROMAN.2015.7333682
Citations: 1
Predictors of psychological anthropomorphization, mind perception, and the fulfillment of social needs: A case study with a zoomorphic robot
F. Eyssel, Michaela Pfundmair
We conducted a human-robot interaction (HRI) experiment in which we tested the effect of inclusionary status (social inclusion vs. social exclusion) and a dispositional correlate of anthropomorphism on social needs fulfillment and the evaluation of a social robot, respectively. The experiment was initiated by an interaction phase including free play between the user and the zoomorphic robot Pleo. This was followed by the experimental manipulation according to which participants were exposed to an experience of social inclusion or social exclusion during a computer game. Subsequently, participants evaluated the robot regarding psychological anthropomorphism, mind perception, and reported the experienced fulfillment of social needs as well as their individual disposition to anthropomorphize. The present research aimed at demonstrating that situationally induced inclusionary status should predominantly influence experienced social needs fulfillment, but not anthropomorphic inferences about a robot. Analogously, we presumed that evaluations of the robot should mainly be driven by the individual disposition to anthropomorphize nonhuman entities, whereas inclusionary status should not affect these judgments. As predicted, inclusionary status only affected experienced social needs fulfillment, whereas the experimental manipulation did not affect robot-related evaluations. In a similar vein, participants low (vs. high) in anthropomorphism differed in their assessment of humanity and mind perception of the robot prototype, whereas inclusionary status did not affect these anthropomorphic inferences. Results are discussed in light of the existing literature on social exclusion, social needs fulfillment, and anthropomorphization of robots.
DOI: 10.1109/ROMAN.2015.7333647
Citations: 18
Floor estimation by a wearable travel aid for visually impaired
Hiromi Watanabe, T. Tanzawa, Tsuyoshi Shimizu, S. Kotani
During a disaster, it may be difficult for the visually impaired to use infrastructure or to obtain a care worker's support. To support the independence of the visually impaired, we have developed a wearable travel aid that assists safe travel, including navigating stairs, without special infrastructure. Because safe navigation requires accurate position estimation, our system fuses data from different sensors to estimate a pedestrian's position. In this paper, we propose a floor estimation method based on fuzzy inference with a laser range finder (LRF). When walking on a floor, an image processing system and an LRF are used together to estimate position; when walking on stairs, a second LRF is used to estimate the floor and detect obstacles. The experimental results show that our floor estimation is also useful for position estimation.
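The abstract does not give the fuzzy rules, but the general pattern of fuzzy inference over an LRF-derived height measurement can be sketched as follows. This is our own illustration, not the authors' implementation; the membership functions and thresholds are invented for the example.

```python
# Illustrative sketch (not the paper's implementation): fuzzy inference on a
# height change measured from a laser-range-finder profile, deciding whether
# the pedestrian is on a flat floor or on stairs. All parameters are invented.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_surface(step_height_m):
    """Compare fuzzy memberships for 'flat floor' vs. 'stair step'."""
    flat = tri(step_height_m, -0.05, 0.0, 0.05)   # near-zero height change
    stair = tri(step_height_m, 0.10, 0.18, 0.30)  # typical stair-riser height
    return "floor" if flat >= stair else "stairs"
```

A real system would fuse this degree of membership with the image-processing and odometry estimates rather than thresholding a single measurement.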
DOI: 10.1109/ROMAN.2015.7333581
Citations: 0
Social and empathic behaviours: Novel interfaces and interaction modalities
P. Marti, I. Iacono
This paper describes the results of research conducted in the European project Accompany, whose aim is to provide older people with services in a motivating and socially acceptable manner to facilitate independent living at home. The project developed a system consisting of a robotic companion, Care-O-bot, as part of a smart environment. Intensive research was conducted to investigate and experiment with robot behaviours that trigger empathic exchanges between an older person and the robot. The paper is articulated in two parts. The first part illustrates the theory that inspired the development of a context-aware Graphical User Interface (GUI) used to interact with the robot. The GUI integrates an expressive mask that allows perspective taking, with the aim of stimulating empathic exchanges. The second part focuses on the user evaluation and reports the outcomes of three different tests. The results of the first two tests show positive acceptance of the GUI by the older people. The final test reports qualitative comments by senior participants on the occurrence of empathic exchanges with the robot.
DOI: 10.1109/ROMAN.2015.7333634
Citations: 4
Augmented reality robot navigation using infrared marker
Ryotaro Kuriya, T. Tsujimura, K. Izumi
This paper proposes a new augmented reality system for robot navigation. It generates pattern IDs by analyzing infrared optical markers projected in the real world, enabling the robot to perform complicated motions through robot commands corresponding to the pattern IDs. A prototype system is constructed to conduct fundamental experiments on remote robot navigation.
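The mapping from decoded marker to robot command can be sketched as a simple lookup, shown below as our own hypothetical illustration (the marker encoding, command names, and grid size are invented; the paper does not specify them):

```python
# Hypothetical sketch of the idea: decode a projected infrared marker into a
# pattern ID, then look up the robot command bound to that ID.

COMMANDS = {
    0b1010: "forward",
    0b0110: "turn_left",
    0b1001: "turn_right",
    0b1111: "stop",
}

def decode_marker(cells):
    """Pack a row-major grid of detected IR dots (booleans) into a pattern ID."""
    pattern_id = 0
    for bit in cells:                      # most-significant cell first
        pattern_id = (pattern_id << 1) | int(bit)
    return pattern_id

def command_for(cells):
    """Map a detected marker to a command; unknown patterns hold position."""
    return COMMANDS.get(decode_marker(cells), "hold")
```

In a real system the cell detections would come from thresholding an IR camera image of the projected marker rather than from a boolean list.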
DOI: 10.1109/ROMAN.2015.7333607
Citations: 9
Probabilistic modeling of mental models of others
T. Nagai, Kasumi Abe, Tomoaki Nakamura, N. Oka, T. Omori
Intimacy is a very important factor not only for communication between humans but also for communication between humans and robots. In this research we propose an action decision model based on another agent's friendliness value, which can be estimated using mental models of others. We examine the mutual adaptation process of two agents, each with its own model of the other, through the interaction between them.
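The core loop of such a model, estimating the other agent's friendliness from observed reactions and deciding actions accordingly, can be sketched as follows. This is our own simplified construction, not the authors' probabilistic model; the update rule, threshold, and action names are invented for illustration.

```python
# Illustrative sketch (not the authors' model): maintain a scalar friendliness
# estimate in [0, 1] for the other agent, nudge it toward each observed
# reaction, and choose an action based on the current estimate.

def update_friendliness(prior, reaction_positive, lr=0.2):
    """Move the friendliness estimate toward the observed reaction."""
    target = 1.0 if reaction_positive else 0.0
    return prior + lr * (target - prior)

def decide_action(friendliness, threshold=0.5):
    """Approach an agent estimated as friendly; otherwise keep distance."""
    return "approach" if friendliness >= threshold else "keep_distance"
```

When both agents run such a loop on each other, each one's actions change the reactions the other observes, which is the mutual adaptation process the abstract describes.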
DOI: 10.1109/ROMAN.2015.7333635
Citations: 4