
Latest publications: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)

Evoking an Intentional Stance during Human-Agent Social Interaction: Appearances Can Be Deceiving
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515420
Casey C. Bennett
A critical issue during human-agent and human-robot interaction is eliciting an intentional stance in the human interactor, wherein the human perceives the agent as a fully "intelligent" being with full agency over its own intentions and desires. Eliciting such a stance, however, has proven elusive, despite work in cognitive science, robotics, and human-computer interaction over the past half-century. Here, we argue for a paradigm shift in our approach to this problem, based on a synthesis of recent evidence from social robotics and digital avatars. In short, in order to trigger an intentional stance in humans, perhaps our artificial agents need to adopt one about themselves.
Pages: 362-368
Citations: 2
Connecting Humans and Robots Using Physiological Signals – Closing-the-Loop in HRI
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515383
Austin Kothig, J. Muñoz, S. Akgun, A. M. Aroyo, K. Dautenhahn
Technological advancements in creating and commercializing novel unobtrusive, wearable physiological sensors generate new opportunities to develop adaptive human-robot interaction (HRI) scenarios. Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages for creating meaningful interactive experiences. Although bodily signals are widely used to explain human behaviors in post-interaction analysis with social agents, using them to create more adaptive and responsive systems remains an open challenge. This paper presents the development of an open-source, integrative, and modular library created to facilitate the design of physiologically adaptive HRI scenarios. The HRI Physio Lib streamlines the acquisition, analysis, and translation of human body signals into additional dimensions of perception in HRI applications using social robots. The software framework has four main components: signal acquisition, processing and analysis, social robot and communication, and scenario and adaptation. Information gathered from the sensors is synchronized and processed to allow designers to create adaptive systems that respond to detected human states. This paper describes the library and presents a use case in which a humanoid robot acts as a cardio-aware exercise coach, using heartbeats to adapt the exercise intensity and maximize cardiovascular performance. The main challenges, lessons learned, scalability of the library, and implications of the physio-adaptive coach are discussed.
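The cardio-aware coaching use case described above can be illustrated with a minimal adaptation loop. This is a hypothetical sketch, not the HRI Physio Lib API: the function names and the heart-rate thresholds (the common 220-minus-age estimate and a 50-70% moderate-intensity zone) are illustrative assumptions.

```python
# Sketch of a physiologically adaptive control loop in the spirit of the
# "scenario and adaptation" component. All names and thresholds are
# hypothetical illustrations, not the library's actual API.

def target_hr_zone(age, lower=0.5, upper=0.7):
    """Moderate-intensity heart-rate zone via the common 220-age estimate."""
    hr_max = 220 - age
    return lower * hr_max, upper * hr_max

def adapt_intensity(current_hr, age, intensity, step=0.1):
    """Nudge exercise intensity (0..1) to keep the heart rate in the zone."""
    low, high = target_hr_zone(age)
    if current_hr < low:
        intensity = min(1.0, intensity + step)   # under-exerted: push harder
    elif current_hr > high:
        intensity = max(0.0, intensity - step)   # over-exerted: ease off
    return intensity

# Example: a 40-year-old at 95 bpm (zone is 90-126 bpm) keeps intensity.
print(adapt_intensity(95, age=40, intensity=0.5))  # -> 0.5
```

In a closing-the-loop design like this, the robot coach would re-run the adaptation step each time a synchronized heart-rate sample arrives from the sensor stream.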
Pages: 735-742
Citations: 8
A Basic Study for Acceptance of Robots as Meal Partners: Number of Robots During Mealtime, Frequency of Solitary Eating, and Past Experience with Robots
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515451
Ayaka Fujii, K. Okada, M. Inaba
Due to recent lifestyle changes, instances of people eating alone have been increasing. We think robots can be good meal partners without risking disease transmission. Furthermore, people can eat with robots without worrying about coordinating mealtimes. In this study, we examine who is more likely to accept robots as eating partners and compare eating with a single robot to eating with multiple robots. The results revealed that people with extensive experience interacting with robots, as well as those who rarely eat alone, felt better about eating with robots, whereas those who often eat alone enjoyed eating with multiple robots.
Pages: 73-80
Citations: 1
Interactive Vignettes: Enabling Large-Scale Interactive HRI Research
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515376
Wen-Ying Lee, Mose Sakashita, E. Ricci, Houston Claure, François Guimbretière, Malte F. Jung
We propose the use of interactive vignettes as an alternative to traditional text- and video-based vignettes for conducting large-scale Human-Robot Interaction (HRI) studies. Interactive vignettes maintain the advantages of traditional vignettes while offering additional affordances for participant interaction and data collection through interactive elements. We discuss the core affordances of interactive vignettes, including explorability, responsiveness, and non-linearity, and look into how these affordances can enable HRI research with more complex scenarios. To demonstrate the strength of the approach, we present a case study of our own research project with N=87 participants and show the data we collect through interactive vignettes. We suggest that the use of interactive vignettes can benefit HRI researchers in learning how participants interact with, respond to, and perceive a robot’s behavior in pre-defined scenarios.
Pages: 1289-1296
Citations: 4
Modeling human-like robot personalities as a key to foster socially aware navigation *
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515556
Alessandra Sorrentino, O. Khalid, Luigi Coviello, F. Cavallo, L. Fiorini
This work investigates whether a "robot's personality" can affect the social perception of the robot in a navigation task. To this end, we implemented a dedicated human-aware navigation system that adapts the configuration of the navigation parameters (i.e., proxemics and velocity) based on two different human-like personalities, extrovert (EXT) and introvert (INT), and we compared them with a non-social behavior (NS). We evaluated the system in a dynamic scenario in which each participant needed to pass by a robot moving in the opposite direction, with the robot showing a different personality each time. The Eysenck Personality Inventory and a modified version of the Godspeed questionnaire were administered to assess the user's and the perceived robot's personalities, respectively. The results show that 19 out of 20 subjects involved in the study perceived a difference among the personalities exhibited by the robot, in terms of both proxemics and velocity. Furthermore, the results highlight a general preference for a complementary robot personality, suggesting some guidelines for future work in the human-aware navigation field.
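The personality-to-parameter mapping described above can be sketched as a simple lookup that a local planner consults. The numeric proxemic radii and speeds below are illustrative assumptions, not the values used in the study.

```python
# Hypothetical mapping of robot "personalities" to navigation parameters,
# following the idea that extrovert (EXT), introvert (INT) and non-social
# (NS) styles differ in proxemics and velocity. Values are made up.

PERSONALITY_PARAMS = {
    "EXT": {"proxemic_radius_m": 0.8, "max_speed_mps": 0.9},  # closer, faster
    "INT": {"proxemic_radius_m": 1.5, "max_speed_mps": 0.5},  # farther, slower
    "NS":  {"proxemic_radius_m": 0.5, "max_speed_mps": 1.0},  # ignores social costs
}

def configure_planner(personality):
    """Return the local-planner settings for the requested personality."""
    try:
        return PERSONALITY_PARAMS[personality]
    except KeyError:
        raise ValueError(f"unknown personality: {personality!r}")

print(configure_planner("INT")["max_speed_mps"])  # -> 0.5
```

Keeping the personality encoded as data rather than code makes it easy to swap behaviors between experimental conditions without touching the planner itself.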
Pages: 95-101
Citations: 3
Towards Out-of-Sight Predictive Tracking for Long-Term Indoor Navigation of Non-Holonomic Person Following Robot*
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515348
A. Ashe
The ability to predict the movements of the target person allows a person-following robot (PFR) to coexist with the person while still complying with social norms. In human-robot collaboration, this is an essential requisite for long-term, time-dependent navigation and for not losing sight of the person during momentary occlusions that may arise in a crowd due to static or dynamic obstacles, other human beings, or intersections in the local surroundings. The PFR must not only traverse to the previously unknown goal position but also relocate the target person after the miss and resume following. In this paper, we treat this as a coupled motion-planning and control problem by formulating a model predictive control (MPC) controller with non-linear constraints for a wheeled differential-drive robot. Then, using a human-motion prediction strategy based on the recorded pose and trajectory information of both the moving target person and the PFR, we add further constraints to the same MPC to recompute the optimal controls for the wheels. We make comparisons with RNNs such as LSTM and with Early Relocation for learning the best-predicted reference path. MPC is well suited to complex constrained problems because it allows the PFR to periodically update the tracking information and adapt to the moving person's stride. We show results in a simulated indoor environment and lay the foundation for implementation on a real robot. Our proposed method offers robust person-following behaviour without the explicit need for policy learning or offline computation, allowing us to design a generalized framework.
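The receding-horizon idea behind the controller above can be shown with a toy example. The paper formulates a constrained nonlinear MPC; the sketch below is a deliberate simplification that replaces the solver with a brute-force search over a small discrete control set, propagated through the standard unicycle (differential-drive) model. All constants are illustrative.

```python
import math
from itertools import product

# Minimal receding-horizon sketch of person following with a unicycle model.
# Instead of a nonlinear solver, the "optimization" is an exhaustive search
# over a small candidate set of constant (v, w) controls.

DT, HORIZON = 0.2, 5
V_SET = [0.0, 0.4, 0.8]              # candidate linear velocities (m/s)
W_SET = [-1.0, -0.5, 0.0, 0.5, 1.0]  # candidate angular velocities (rad/s)

def rollout(state, v, w):
    """Propagate the unicycle model with constant controls over the horizon."""
    x, y, th = state
    for _ in range(HORIZON):
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        th += w * DT
    return x, y, th

def mpc_step(state, goal):
    """Pick the control pair whose rollout ends closest to the predicted goal."""
    def cost(u):
        x, y, _ = rollout(state, *u)
        return math.hypot(goal[0] - x, goal[1] - y)
    return min(product(V_SET, W_SET), key=cost)

# Robot at the origin facing +x; person predicted at (2, 0): drive straight.
v, w = mpc_step((0.0, 0.0, 0.0), (2.0, 0.0))
print(v, w)  # -> 0.8 0.0
```

In a full implementation, only the first control of the optimal sequence is applied and the problem is re-solved at the next timestep, which is what lets the controller absorb updated predictions of the person's trajectory.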
Pages: 476-481
Citations: 0
Generation Differences in Perception of the Elderly Care Robot
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515534
W. Khaksar, Margot M. E. Neggers, E. Barakova, J. Tørresen
Introducing robots in healthcare facilities and homes may reduce the workload of healthcare personnel while providing users with better and more available services. It may also enable interactions for senior adults that are engaging and safe against transmitting contagious diseases. A major challenge in this regard is to design and adapt the robot's behavior based on the requirements and preferences of different users. In this paper, we report a user study on how people perceive different kinds of robot encounters. We had two groups of target users: one with senior residents at a care center and another with young students at a university, the latter being representative of visitors and care volunteers in the facility. Several common scenarios were created to evaluate the participants' perception of the robot's behavior. Two sets of questionnaires were used to collect feedback on the behavior and on the users' general perception of the robot's different styles of behavior. An exploratory analysis of the effect of age shows that the age of the targeted user group should be considered one of the main criteria when designing the social parameters of a care robot, as seniors preferred slower speed and closer distance to the robot. The results can contribute to improving a future robot's control to better suit users from different generations.
Pages: 551-558
Citations: 6
Virtual tactile texture using electrostatic friction display for natural materials: The role of low and high frequency textural stimuli
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515405
Kazuya Otake, S. Okamoto, Yasuhiro Akiyama, Yoji Yamada
As touchscreens have become a standard feature of mobile devices, technologies for presenting tactile texture feedback on the panel have been attracting attention. We tested a new method for presenting natural materials using an electrostatic tactile texture display. In this method, the frictional forces are decomposed into low- and high-frequency components. The low-frequency component was modeled based on Coulomb's friction law, such that the friction force was reactive to the finger's normal force. The high-frequency component was modeled using an auto-regressive model to retain its frequency-spectrum features. Four natural material types, representing leather, cork, denim, and drawing paper, were presented to six assessors using this method. When only the low-frequency friction-force components were rendered, the materials were correctly recognized at a rate of 70%. When the high-frequency components were superposed, this rate increased to 80%, although the difference was not statistically significant. Our approach of combining a physical friction model and a frequency spectrum for the low- and high-frequency components, respectively, allows people to recognize virtual natural materials rendered on touch panels.
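The two-band decomposition described above can be sketched numerically: a Coulomb term proportional to the normal force plus an autoregressive (AR) sample for the texture band. The friction coefficient and AR coefficients below are made-up placeholders, not values fitted to the paper's materials.

```python
# Illustrative reconstruction of the two-band friction rendering:
# low-frequency Coulomb term + high-frequency AR texture term.
# All coefficients are hypothetical, not fitted to real textures.

def coulomb_friction(normal_force, mu=0.4):
    """Low-frequency component: F = mu * N (Coulomb's friction law)."""
    return mu * normal_force

def ar_texture(coeffs, history, noise):
    """High-frequency component: AR(p) sample from past outputs plus noise."""
    return sum(c * h for c, h in zip(coeffs, history)) + noise

def rendered_friction(normal_force, coeffs, history, noise):
    """Total commanded friction = Coulomb term + AR texture term."""
    return coulomb_friction(normal_force) + ar_texture(coeffs, history, noise)

# Finger pressing with 1.0 N; AR(2) texture with recent outputs [0.02, 0.01].
print(rendered_friction(1.0, [0.5, -0.2], [0.02, 0.01], 0.0))
```

Splitting the signal this way lets the slow component track the finger's normal force while the AR component preserves the frequency spectrum that distinguishes one material from another.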
Pages: 392-397
Citations: 2
Predicted information gain and convolutional neural network for prediction of gait periods using a wearable sensors network
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515395
Uriel Martinez-Hernandez, Adrian Rubio-Solis
This work presents a method for the recognition of walking activities and the prediction of gait periods using wearable sensors. First, a Convolutional Neural Network (CNN) is used to recognise the walking activity and gait period. Second, the output of the CNN is used by a Predicted Information Gain (PIG) method to predict the next most probable gait period while walking. The outputs of these two processes are combined to adapt the recognition accuracy of the system. This adaptive combination allows us to achieve an optimal recognition accuracy over time. The validation of this work is performed with an array of wearable sensors for the recognition of level-ground walking, ramp ascent, and ramp descent, and the prediction of gait periods. The results show that the proposed system can achieve accuracies of 100% and 99.9% for the recognition of walking activity and gait period, respectively. These results show the benefit of having a system capable of predicting or anticipating the next information or event over time. Overall, this approach offers a method for accurate activity recognition, which is a key process for the development of wearable robots capable of safely assisting humans in activities of daily living.
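The idea of combining a classifier's output with a prediction of the next gait period can be illustrated with a small fusion step. This is a hedged sketch only: the transition matrix, the blending weight, and the gait-period labels are illustrative assumptions, not the paper's CNN or PIG formulation.

```python
# Sketch of fusing per-period classifier probabilities with a
# transition-based prediction of the next gait period. The transition
# matrix and the blending weight alpha are illustrative assumptions.

GAIT_PERIODS = ["heel_strike", "flat_foot", "heel_off", "swing"]

# Row i: assumed probability that period i is followed by period j (cyclic gait).
TRANSITIONS = [
    [0.10, 0.80, 0.05, 0.05],
    [0.05, 0.10, 0.80, 0.05],
    [0.05, 0.05, 0.10, 0.80],
    [0.80, 0.05, 0.05, 0.10],
]

def fuse(cnn_probs, prev_period_idx, alpha=0.7):
    """Blend classifier output with the transition-model prediction."""
    prior = TRANSITIONS[prev_period_idx]
    fused = [alpha * c + (1 - alpha) * p for c, p in zip(cnn_probs, prior)]
    total = sum(fused)
    return [f / total for f in fused]  # renormalize to a distribution

def recognise(cnn_probs, prev_period_idx):
    """Return the label of the most probable fused gait period."""
    fused = fuse(cnn_probs, prev_period_idx)
    return GAIT_PERIODS[max(range(len(fused)), key=fused.__getitem__)]

# Ambiguous classifier output after a heel strike: the prior tips it to flat_foot.
print(recognise([0.40, 0.45, 0.10, 0.05], prev_period_idx=0))  # -> flat_foot
```

The benefit of this kind of fusion is exactly the one the abstract highlights: anticipating the next period lets the system resolve samples that the classifier alone finds ambiguous.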
2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1132-1137
Citations: 0
Understanding Robots: Making Robots More Legible in Multi-Party Interactions
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515485
Miguel Faria, Francisco S. Melo, A. Paiva
In this work we explore implicit communication between humans and robots, through movement, in multi-party (or multi-user) interactions. In particular, we investigate how a robot can move to better convey its intentions using legible movements in multi-party interactions. Current research on the application of legible movements has focused on single-user interactions, leaving a gap in our knowledge of how such movements affect multi-party interactions. We propose a novel approach that extends the notion of legible motion to multi-party settings by considering that legibility depends on all human users involved in the interaction, and should take into account how each of them perceives the robot's movements from their respective points of view. We show, through simulation and a user study, that our proposed model of multi-user legibility leads to movements that, on average, optimize the legibility of the motion as perceived by the group of users. Our model creates movements that allow each human to understand the robot's intentions more quickly and confidently, thus creating safer, clearer and more efficient interactions and collaborations.
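The core idea in the abstract, that legibility should be evaluated from every observer's point of view, can be sketched by scoring a trajectory per observer and averaging. The distance-based goal-inference model, the projection matrices, and the example goals and trajectories below are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def goal_probability(traj, goals, goal_idx, projection):
    # Probability that one observer, seeing the motion through their own
    # viewpoint projection, infers goal `goal_idx`; a simplified,
    # distance-based stand-in for a full legibility model.
    end = traj[-1] @ projection
    d = np.linalg.norm(goals @ projection - end, axis=1)
    scores = np.exp(-d)
    return scores[goal_idx] / scores.sum()

def multi_user_legibility(traj, goals, goal_idx, projections):
    # Average the per-observer probability of the true goal, so a motion
    # scores well only if it reads clearly from every viewpoint.
    return float(np.mean([goal_probability(traj, goals, goal_idx, P)
                          for P in projections]))

goals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
front = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # x-y viewpoint
top = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])    # x-z viewpoint

toward_g0 = np.array([[0.0, 0.0, 0.0], [0.5, 0.05, 0.0], [0.9, 0.1, 0.0]])
toward_g1 = np.array([[0.0, 0.0, 0.0], [0.05, 0.5, 0.0], [0.1, 0.9, 0.0]])

# The motion heading to goal 0 reads as goal 0 from both viewpoints,
# so its multi-user legibility for goal 0 is higher.
assert multi_user_legibility(toward_g0, goals, 0, [front, top]) > \
       multi_user_legibility(toward_g1, goals, 0, [front, top])
```

Averaging is one aggregation choice; taking the minimum over observers instead would insist that the motion be legible even from the worst-placed viewpoint.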
2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1031-1036
Citations: 8