
2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN): Latest Publications

Learning Task Constraints in Visual-Action Planning from Demonstrations
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515548
Francesco Esposito, Christian Pek, Michael C. Welle, D. Kragic
Visual planning approaches have shown great success for decision-making tasks with no explicit model of the state space. Learning a suitable representation and constructing a latent space where planning can be performed allows non-experts to set up and plan motions by just providing images. However, learned latent spaces are usually not semantically interpretable, and thus it is difficult to integrate task constraints. We propose a novel framework to determine whether plans satisfy constraints, given demonstrations of policies that satisfy or violate the constraints. The demonstrations are realizations of Linear Temporal Logic formulas, which are employed to train Long Short-Term Memory (LSTM) networks directly in the latent space representation. We demonstrate that our architecture enables designers to easily specify, compose, and integrate task constraints, and that it achieves high accuracy. Furthermore, this visual planning framework enables human interaction, coping with the environmental changes that a human worker may introduce. We show the flexibility of the method on a box-pushing task in a simulated warehouse setting with different task constraints.
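The demonstrations-as-LTL-realizations idea can be illustrated with a toy labeling function. This is a minimal sketch (hypothetical predicates and trace format, not the authors' code, which trains LSTMs on latent image embeddings): it labels symbolic box-pushing traces against a composed constraint G(avoid forbidden zone) AND F(reach goal), the kind of satisfy/violate label such a classifier is trained to reproduce.

```python
# Illustrative sketch, not the authors' implementation: labeling
# demonstration traces as satisfying or violating two common LTL
# constraint patterns. States here are symbolic zone names; in the
# paper the inputs are latent-space embeddings fed to an LSTM.

def globally(pred, trace):
    """G(pred): pred must hold in every state of the trace."""
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    """F(pred): pred must hold in at least one state of the trace."""
    return any(pred(s) for s in trace)

# Hypothetical box-pushing traces: each state records the box's zone.
demo_ok  = ["start", "aisle", "aisle", "goal"]
demo_bad = ["start", "forbidden", "aisle", "goal"]

avoid_forbidden = lambda s: s != "forbidden"
reach_goal      = lambda s: s == "goal"

def label(trace):
    # A trace is a positive example iff it satisfies the composed
    # constraint G(not forbidden) AND F(goal).
    return globally(avoid_forbidden, trace) and eventually(reach_goal, trace)

print(label(demo_ok))   # True
print(label(demo_bad))  # False
```

Composing constraints then amounts to conjoining such predicates before generating the training labels.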
Pages: 131-138
Citations: 1
Human-Robot Trust Assessment Using Top-Down Visual Tracking After Robot Task Execution Mistakes
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515501
Kasper Hald, M. Rehm, T. Moeslund
With increased interest in close-proximity human-robot collaboration in production settings, it is important that we understand how robot behaviors and mistakes affect human-robot trust, as a lack of trust can cause losses in productivity and over-trust can lead to hazardous misuse. We designed a system for real-time human-robot trust assessment using a top-down depth-camera tracking setup, with the goal of using signs of physical apprehension to infer decreases in trust toward the robot. In an experiment with 20 participants, we evaluated the tracking system in a repetitive collaborative pick-and-place task in which the participant and the robot had to move a set of cones across a table. Midway through the tasks, we disrupted the participants' expectations by having the robot perform a trust-dampening action. Throughout the tasks, we measured the participants' preferred proximity to and trust toward the robot. Comparing irregular robot movements versus task-execution mistakes, as well as simultaneous versus turn-taking collaboration, we found that reported trust decreased significantly when the robot performed an execution mistake that ran counter to the shared objective. This decrease was larger for participants working simultaneously with the robot. The effect of the trust-dampening actions on preferred proximity was inconclusive due to unexplained movement trends between tasks throughout the experiment. Despite being given the option to stop the robot in case of abnormal behavior, participants did not interrupt the robot more often after the trust-dampening actions we tested.
Pages: 892-898
Citations: 3
The CAR Approach: Creative Applied Research Experiences for Master’s Students in Autonomous Platooning
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515560
G. Sidorenko, Wojciech Mostowski, A. Vinel, J. Sjöberg, M. Cooney
Autonomous vehicles (AVs) are crucial robotic systems that promise to improve our lives via safe, efficient, and inclusive transport, while posing new challenges for the education of future researchers in the area that our current research and education might not be ready to deal with: in particular, we do not know what the AVs of the future will look like, practical learning is restricted due to cost and safety concerns, and a high degree of multidisciplinary knowledge is required. Here, following the broad outline of Active Student Participation theory, we propose a pedagogical approach targeted toward AVs, called CAR, that combines Creativity theory, Applied demo-oriented learning, and a Real-world research context. Furthermore, we report on applying the approach to stimulate learning and engagement in a master’s course, in which students freely created a demo with 10 small robots running ROS2 and Ubuntu on Raspberry Pis, in connection with an ongoing research project and a real current problem (SafeSmart and COVID-19). The results suggested that the CAR approach is feasible for enabling learning and mutually beneficial for both the students and researchers involved, and indicated some possibilities for future improvement toward more effective integration of research experiences into second-cycle courses.
Pages: 214-221
Citations: 2
The Influence of Robot's Unexpected Behavior on Individual Cognitive Performance
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515317
Youdi Li, E. Sato-Shimokawara, Toru Yamaguchi
Social robots have become pervasive in learning environments. An empirical understanding of how different individuals perceive and react to a robot’s expressions has become an urgent necessity for sustainable deployment. In this study, we examined whether a robot’s unexpected actions affect individual cognitive performance. We conducted an experiment in which a robot could produce unexpected visual or auditory stimuli, and participants’ reaction times in the Simon task were recorded to investigate the robot’s influence. The results verify that individual differences exist both in the perception of a social robot’s expressions and in the extent of change in cognitive performance. This study provides insights into richer applications of human-robot interaction by taking individual differences in perception and response type into account, and therefore constitutes a modest but significant step toward adaptive human-robot interaction.
Pages: 1103-1109
Citations: 1
Design and Evaluation of an Affective, Continuum Robotic Appendage for Child-Robot Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515546
Deanna Kocher, Juliette Bendheim, K. Green
We introduce a robotic appendage (a "fin") for a non-humanoid mobile robot that can communicate affect to child collaborators. Affective configurations were generated from a collection of cartoon images that featured characters with floppy or bunny ears. These images were classified according to the six Ekman emotions, analyzed to create ideal emotion configurations, and validated with a user study. From these configurations, we designed multiple continuum robot fin appendages and evaluated them based on (a) their ability to achieve the generated affect configurations, and (b) their durability for sustained use in child-robot interaction studies.
Pages: 586-591
Citations: 1
Robot Facial Expression Framework for Enhancing Empathy in Human-Robot Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515533
Ung Park, Minso Kim, Youngeun Jang, GiJae Lee, Kanggeon Kim, Igil Kim, Jong-suk Choi
A social robot interacts with humans on the basis of social intelligence, and related applications are being developed across diverse fields as robots become increasingly integrated into modern society. In this regard, social intelligence and interaction are the keywords of a social robot. Social intelligence refers to the ability to manage interactions and the thoughts and feelings involved in relationships with other people; in this study, we applied to the robot one of its components, primal empathy, the ability to empathize by perceiving emotional signals. We proposed that the empathic ability of a social robot can be improved if it can produce facial expressions based on the emotional state of a user. Moreover, we suggested a framework of facial expressions for robots. These facial expressions can be reused across various social robot platforms to realize such a strategy.
Pages: 832-838
Citations: 4
Detecting Compensatory Motions and Providing Informative Feedback During a Tangible Robot Assisted Game for Post-Stroke Rehabilitation
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515325
A. Ozgur, Hala Khodr, Barbara Bruno, Nicolas Gandar, M. Wessel, F. Hummel, P. Dillenbourg
Gamified rehabilitation tackles the problem of keeping patients engaged in, and motivated to do, physical rehabilitation in order to improve its efficacy. However, compared with standard rehabilitation, patients are freer to move about and may compensate for their motion difficulties with parasitic movements, which would greatly reduce the efficacy of the rehabilitation. To identify and characterize compensatory motions, we collected and analyzed video data of people playing the "tangible Pacman" game (an upper-limb rehabilitation game in which a patient moves a semi-passive robot, the "Pacman", on a map to collect 6 apples while being chased by one or two autonomous robots, the "ghosts"). Participants included 10 healthy elderly adults and 10 chronic stroke patients, who played multiple runs of the game with different-sized maps and various game configurations. By analyzing the video recordings, we successfully identified higher shoulder and torso lateral-tilt compensation in stroke patients and developed a proof-of-concept compensatory-motion detection system that relies on a wearable Inertial Measurement Unit and ROS to provide in-game, real-time visual feedback on compensation.
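As a rough illustration of the IMU-based detection idea (the function names and threshold below are assumptions, not the authors' implementation), torso lateral tilt can be estimated from the accelerometer's gravity components and flagged when it exceeds a comfort threshold:

```python
import math

# Illustrative sketch with a hypothetical threshold: at rest the IMU's
# accelerometer measures gravity, so the lateral tilt angle can be
# estimated from the y (lateral) and z (vertical) components.

TILT_THRESHOLD_DEG = 15.0  # assumed comfort threshold, degrees

def lateral_tilt_deg(acc_y, acc_z):
    """Estimated lateral tilt of the torso, in degrees."""
    return math.degrees(math.atan2(acc_y, acc_z))

def is_compensating(acc_y, acc_z, threshold=TILT_THRESHOLD_DEG):
    return abs(lateral_tilt_deg(acc_y, acc_z)) > threshold

print(is_compensating(0.5, 9.7))   # slight sway -> False
print(is_compensating(4.0, 8.9))   # pronounced lateral lean -> True
```

A real system would additionally filter the signal over time before triggering the in-game feedback, since instantaneous accelerometer readings are noisy during movement.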
Pages: 243-249
Citations: 3
Human-Aware Robot Navigation Based on Learned Cost Values from User Studies
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515481
K. Bungert, Lilli Bruckschen, S. Krumpen, Witali Rau, Michael Weinmann, Maren Bennewitz
In this paper, we present a new approach to human-aware robot navigation, which extends our previous proximity-based navigation framework [1] by introducing visibility and predictability as new parameters. We derived these parameters from a user study and incorporated them into a cost function, which models the user’s discomfort with respect to a relative robot position based on proximity, visibility, predictability, and work efficiency. We use this cost function in combination with an A* planner to create a user-preferred robot navigation policy. In comparison to our previous framework, our new cost function results in a 6% increase in social-distance compliance, a 6.3% decrease in visibility of the robot, as preferred, and an average decrease in orientation changes of 12.6° per meter, resulting in better predictability, while maintaining a comparable average path length. We further performed a virtual reality experiment to evaluate user comfort based on direct human feedback, finding that participants on average felt comfortable to very comfortable with the robot trajectories resulting from our approach.
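The combination of a discomfort cost with an A* planner can be sketched as follows. This is an illustrative toy (hypothetical grid, weight, and a proximity-only discomfort term standing in for the learned proximity/visibility/predictability costs), not the authors' implementation:

```python
import heapq

# Toy human-aware A*: each step's cost adds a discomfort term that
# grows as the robot gets closer to a human. With a large enough
# weight, the cheapest path detours around the human.

W_PROXIMITY = 10.0  # assumed weight on the discomfort term

def discomfort(cell, human):
    d = abs(cell[0] - human[0]) + abs(cell[1] - human[1])
    return W_PROXIMITY / (1 + d)  # higher cost near the human

def astar(start, goal, human, size=6):
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            g = best[cur] + 1.0 + discomfort(nxt, human)
            if g < best.get(nxt, float("inf")):
                best[nxt] = g
                # Manhattan distance: admissible since each step costs > 1
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (g + h, nxt, path + [nxt]))
    return None

path = astar((0, 0), (5, 0), human=(2, 0))
print(path)  # the returned path avoids the human's cell at (2, 0)
```

In the paper the per-cell costs come from the user-study-derived discomfort model rather than a single hand-tuned proximity term.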
Pages: 337-342
Citations: 2
Dropping Sensation for Development of Lower Limb Force Feedback Device
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515498
T. Masuda, T. Tanaka, Ryunosuke Sawahashi, M. Okui, Rie Nishihama, T. Nakamura
In this study, we evaluate the dropping sensation for the development of a wearable lower-limb force feedback device that can render both dropping and walking sensations. The developed device can render the dropping sensation at a smaller height than in reality by decelerating and stopping the descent while the drop image is rendered. Considering that the user will be walking with the device, a smaller device height leads to better safety. The purpose of this study is to clarify the required height of the platform part’s vertical range of motion and the feasibility of the concept of rendering the dropping sensation. For this purpose, we evaluated the dropping sensation under different acceleration times and deceleration magnitudes. The results showed that rendering the dropping sensation required more than 0.41 s of descent at an acceleration of approximately 1377 mm/s². Moreover, the dropping sensation and sense of reality were not impaired even when the platform part under the foot was decelerated. This result indicates that the device can be made smaller.
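The reported numbers imply the platform's required vertical travel. A quick sketch of the arithmetic, using the constant-acceleration relation h = (1/2)at² with the acceleration and minimum descent time quoted above:

```python
# Vertical travel implied by the reported descent values:
# h = (1/2) * a * t^2 for constant acceleration from rest.

a = 1377.0  # acceleration, mm/s^2 (from the abstract)
t = 0.41    # minimum descent time, s (from the abstract)

h = 0.5 * a * t**2
print(round(h, 1))  # ~115.7 mm of vertical range of motion
```

So on the order of 12 cm of platform travel suffices to render the dropping sensation, consistent with the goal of keeping the wearable device small.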
Pages: 398-405
引用次数: 3
"The robot may not notice my discomfort" – Examining the Experience of Vulnerability for Trust in Human-Robot Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515513
Glenda Hannibal, A. Weiss, V. Charisi
Ensuring trust in human-robot interaction (HRI) is considered essential for widespread use of robots in society and everyday life. While the majority of studies use game-based and high-risk scenarios with low familiarity to gain a deeper understanding of human trust in robots, scenarios with more subtle trust violations that could happen in everyday life situations are less often considered. In this paper, we present a theory-driven approach to studying the situated trust in HRI by focusing on the experience of vulnerability. Focusing on vulnerability not only challenges previous work on trust in HRI from a theoretical perspective, but is also useful for guiding empirical investigations. As a first proof-of-concept study, we conducted an interactive online survey that demonstrates that it is possible to measure human experience of vulnerability in the ordinary, mundane, and familiar situation of clothes shopping. We conclude that the inclusion of subtle trust violation scenarios occurring in the everyday life situation of clothes shopping enables a better understanding of situated trust in HRI, which is of special importance when considering more near-future applications of robots.
Glenda Hannibal, A. Weiss, V. Charisi, "'The robot may not notice my discomfort' – Examining the Experience of Vulnerability for Trust in Human-Robot Interaction," 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 704-711, 2021. DOI: 10.1109/RO-MAN50785.2021.9515513
Citations: 7