
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Augmented Reality interface to verify Robot Learning
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223502
Maximilian Diehl, Alexander Plopski, H. Kato, Karinne Ramirez-Amaro
Teaching robots new skills is considered an important aspect of Human-Robot Collaboration (HRC). One challenge is that robots cannot communicate feedback in the same ways as humans do. This decreases trust in robots, since it is difficult to judge before the actual execution whether the robot has learned the task correctly. In this paper, we introduce an Augmented Reality (AR) based visualization tool that allows humans to verify the taught behavior before its execution. Our verification interface displays a virtual simulation embedded into the real environment, temporally coupled with a semantic description of the current action. We developed three designs based on different interface/visualization-technology combinations to explore the potential benefits of AR-enhanced simulations over traditional simulation environments such as RViz. We conducted a user study with 18 participants to assess the effectiveness of the proposed visualization tools with regard to error detection capabilities. One advantage of the AR interfaces is that they provide more realistic feedback than traditional simulations, at a lower cost, since the entire environment does not have to be modeled.
Citations: 9
Increasing Engagement with Chameleon Robots in Bartending Services
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223488
Silvia Rossi, Elena Dell’Aquila, Davide Russo, Gianpaolo Maggi
As the field of service robotics is rapidly growing, such robots are expected to be endowed with the appropriate capabilities to interact with humans in a socially acceptable way. This is particularly relevant in the case of customer relationships, where a positive and affective interaction has an impact on the users' experience. In this paper, we address the question of whether a specific behavioral style of a barman-robot, enacted through para-verbal and non-verbal behaviors, can affect users' engagement and the creation of positive emotions. To that end, we endowed a barman-robot that takes drink orders from human customers with an empathic behavioral style, which aims at triggering the alignment process by mimicking the conversation partner's behavior. This behavioral style is compared to an entertaining style, aiming at creating a positive relationship with the users, and to a neutral style as a control. Results suggest that when participants experienced more positive emotions, the robot was perceived as safer, suggesting that interactions that stimulate positive and open relations with the robot may have a positive impact on the affective dimension of engagement. Indeed, when the empathic robot modulates its behavior according to the user's, the interaction seems more effective than with a neutral robot at improving engagement and positive emotions in public-service contexts.
Citations: 1
Motion Trajectory Estimation of a Flying Object and Optimal Reduced Impact Catching by a Planar Manipulator*
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223443
Min Set Paing, Enock William Nshama, N. Uchiyama
Throwing and catching are fundamental motions for human beings and may be applied in advanced human-robot collaborative tasks. Since catching is more difficult for a robot than throwing, this study deals with reduced-impact catching of a flying object by a planar manipulator. The estimation of the object's trajectory is improved by a Kalman filter, and least-squares fitting is used to accurately predict the catching time, position, and velocity of the manipulator. To achieve reduced-impact catching, minimizing the total impact force in the x- and y-directions is formulated as an optimization problem. A fifth-degree non-periodic B-spline curve is used to generate smooth and continuous trajectories in joint space. The effectiveness of the proposed approaches is demonstrated by simulation and experiment.
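The abstract gives no implementation details, but a small worked example may help illustrate the trajectory-prediction step. The sketch below is a simplified stand-in, not the paper's method: all names, the pure least-squares fit, and the constant-gravity ballistic model are assumptions. It fits a parabola to noisy height samples of a flying object and solves for the time at which the object reaches a given catch height; the paper additionally refines the estimate with a Kalman filter and plans an impact-minimizing, B-spline-parameterized catching motion, which are not reproduced here.

```python
import numpy as np

def predict_catch_time(times, heights, catch_height):
    """Fit y(t) = a*t^2 + b*t + c by least squares to noisy height samples
    of a flying object, then solve for the time it reaches catch_height."""
    A = np.vstack([times**2, times, np.ones_like(times)]).T
    a, b, c = np.linalg.lstsq(A, heights, rcond=None)[0]
    roots = np.roots([a, b, c - catch_height])     # a*t^2 + b*t + (c - h) = 0
    real_roots = roots[np.isreal(roots)].real
    return float(real_roots.max())                 # later crossing = catch time

# Hypothetical example: object tossed upward, observed with measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.4, 20)                      # observation window [s]
y = 1.0 + 4.0 * t - 0.5 * 9.81 * t**2 + rng.normal(0.0, 0.005, t.size)
print(f"predicted catch time: {predict_catch_time(t, y, 1.2):.3f} s")
```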
Citations: 2
Towards An Affective Robot Companion for Audiology Rehabilitation: How Does Pepper Feel Today?
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223534
Pinar Uluer, Hatice Kose, B. Oz, Turgut Can Aydinalev, D. Erol
The motivation of this work is to develop an affective robot companion for audiology rehabilitation and to test the system with deaf or hard-of-hearing children. Two robot modules are developed: a multimodal "stress/emotion/motivation" recognition module that lets the robot "understand" how the children feel, and a behaviour and feedback module that shows the children how the robot "feels". In this study, we focus only on the behaviour and feedback module of the robot. The selected affective/affirmative behaviours are tested by means of tablet games and employed on the robot during an audiology test as a feedback mechanism. Facial data are used together with the surveys to evaluate the children's perception of the robot and the behaviour set.
Citations: 8
Investigating Taste-liking with a Humanoid Robot Facilitator
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223611
Zhuoni Jie, H. Gunes
Tasting is an essential activity in our daily lives. Implementing social robots in the food and drink service industry requires them to be able to understand customers' nonverbal behaviours, including taste-liking. Little is known about whether people alter their behavioural responses related to taste-liking when interacting with a humanoid social robot. We conducted the first beverage-tasting study in which the facilitator is either a human or a humanoid social robot, using priming versus non-priming instruction styles. We found that the facilitator type and facilitation style had no significant influence on cognitive taste-liking. However, in the robot-facilitator scenarios, people were more willing to follow the instructions and felt more comfortable when facilitated with priming. Our study provides new empirical findings and design implications for using humanoid social robots in the hospitality industry.
Citations: 3
Improving Efficiency and Safety in Teleoperated Robotic Manipulators using Motion Scaling and Force Feedback
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223493
Yongmin Cho, Frank L. Hammond
Recent surges in global construction spending are driving the need for safer, more efficient construction methods. One potential way of improving construction methods is to provide user interfaces that allow human operators to control machinery in a more intuitive and strategic manner. This paper explores the use of motion scaling and haptic feedback to improve task completion speed and force control during construction-related teleoperated robotic manipulation tasks. In this study, we design a bench-top Teleoperated Motion Scaling Robotic Arm (TMSRA) platform that allows the human operator to control the motion-mapping rate between the master (haptic console) and slave (robotic excavator) devices, while also providing force feedback and virtual safety functions to help prevent excessive force application by the slave device. We experimentally evaluated the impact of motion scaling and force feedback on human users' ability to perform simulated construction tasks. Experimental results from simulated robotic excavation and demolition tasks show that the maximum force applied to fictive buried utilities was reduced by 77.67% and 76.36%, respectively, due to the force feedback and safety functions. Experimental results from simulated payload pushing/sliding tasks demonstrate that the provision of user-controlled motion scaling increases task efficiency, reducing completion times by at least 31.41% and by as much as 47.76%.
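As a rough illustration of how user-controlled motion scaling and a virtual safety function can be combined, the following minimal sketch (variable names, the scale factor, and the force threshold are assumptions for illustration, not the TMSRA implementation) scales a master-device motion increment before it is sent to the slave arm and blocks further motion along any axis whose measured contact force already exceeds a limit.

```python
import numpy as np

def slave_increment(master_delta, contact_force, scale=0.25, force_limit=20.0):
    """Map a master-console motion increment to a slave-arm increment.

    `scale` is the user-selected motion-mapping rate; axes whose measured
    contact force exceeds `force_limit` are zeroed so the operator cannot
    keep pushing into an obstacle (a simple virtual safety function).
    """
    delta = scale * np.asarray(master_delta, dtype=float)
    blocked = np.abs(np.asarray(contact_force, dtype=float)) > force_limit
    delta[blocked] = 0.0
    return delta

# Hypothetical example: 10 mm commanded on each axis, excessive force along z.
print(slave_increment([10.0, 10.0, 10.0], [2.0, 1.0, 35.0]))  # -> [2.5 2.5 0.]
```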
Citations: 3
Designing Context-Sensitive Norm Inverse Reinforcement Learning Framework for Norm-Compliant Autonomous Agents
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223344
Yue (Sophie) Guo, Boshi Wang, Dana Hughes, M. Lewis, K. Sycara
Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents interact with humans, they also need to reason about various legal rules and social and ethical norms so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn norm-compliant behavior from expert demonstrations. However, norms are context-sensitive, i.e., different norms get activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated for the robot entering the kitchen. Representing various contexts in the state space of the robot, as well as obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity is scalable in the number of norms. We also show, via two experimental scenarios, that CNIRL can handle problems with changing context spaces.
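To make the core idea concrete, here is a minimal sketch of context-sensitive norm rewards (a simplification with hypothetical names, not the CNIRL model itself): each norm carries its own reward function, an expert-assigned priority, and the set of contexts that activate it, and the agent's effective reward is the priority-weighted sum over the norms active in the current context.

```python
from typing import Callable, List

class Norm:
    """A norm with its own reward function, active only in some contexts."""
    def __init__(self, name: str, contexts: List[str], priority: float,
                 reward_fn: Callable[[str, str], float]):
        self.name = name
        self.contexts = set(contexts)    # contexts in which the norm is active
        self.priority = priority         # expert-assigned priority (weight)
        self.reward_fn = reward_fn       # maps (state, action) -> reward

def effective_reward(state: str, action: str, context: str,
                     norms: List[Norm]) -> float:
    """Priority-weighted sum of rewards of the norms active in `context`."""
    return sum(n.priority * n.reward_fn(state, action)
               for n in norms if context in n.contexts)

# Hypothetical norms: privacy applies only in the bathroom context.
privacy = Norm("privacy", ["bathroom"], 2.0,
               lambda s, a: -10.0 if a == "enter" else 0.0)
tidiness = Norm("tidiness", ["kitchen", "bathroom"], 1.0,
                lambda s, a: 1.0 if a == "clean" else 0.0)

print(effective_reward("at_door", "enter", "bathroom", [privacy, tidiness]))  # -20.0
print(effective_reward("at_door", "enter", "kitchen", [privacy, tidiness]))   # 0.0
```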
Citations: 4
Towards Infant Kick Quality Detection to Support Physical Therapy and Early Detection of Cerebral Palsy: A Pilot Study
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223571
Victor Emeli, Katelyn E. Fry, A. Howard
The kicking patterns of infants can provide markers that may predict the trajectory of their future development. Atypical kicking patterns may indicate the possibility of developmental disorders such as Cerebral Palsy (CP). Early intervention and physical therapy that encourages the practice of proper kicking motions can help to improve outcomes in these scenarios. The kicking motions of an infant are usually evaluated by a trained health professional, and subsequent physical therapy is also conducted by a licensed professional. Automating the evaluation of kicking motions and the administration of physical therapy is desirable for standardizing these processes. In this work, we attempt to develop a method to quantify metrics that can provide insight into the quality of infant kicking actions. We utilize a computer vision system to analyze infant kicking stimulated by parent-infant play and by a robotic infant mobile. We utilize statistical techniques to estimate kick type (synchronous and non-synchronous), kick amplitude, kick frequency, and kick deviation. These parameters can prove helpful in determining an infant's kick quality and also in measuring improvements from physical therapy over time. In this paper, we detail the design of the system and discuss the statistical results.
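As an illustration of the kind of metrics listed above, the sketch below (the knee-angle signal, sampling rate, and method choices are assumptions for illustration, not the paper's vision pipeline) estimates kick amplitude, dominant kick frequency, and kick deviation from a joint-angle time series.

```python
import numpy as np

def kick_metrics(knee_angle, fs=30.0):
    """Estimate kick amplitude, dominant frequency, and deviation from a
    knee-angle time series sampled at `fs` Hz."""
    knee_angle = np.asarray(knee_angle, dtype=float)
    amplitude = knee_angle.max() - knee_angle.min()    # peak-to-peak excursion
    centred = knee_angle - knee_angle.mean()
    spectrum = np.abs(np.fft.rfft(centred))
    freqs = np.fft.rfftfreq(centred.size, d=1.0 / fs)
    frequency = freqs[np.argmax(spectrum[1:]) + 1]     # dominant non-DC component
    deviation = knee_angle.std()                       # spread around the mean
    return amplitude, frequency, deviation

# Hypothetical example: a 1.5 Hz kicking motion recorded for 4 s at 30 Hz.
t = np.arange(0.0, 4.0, 1.0 / 30.0)
angle = 40.0 + 25.0 * np.sin(2.0 * np.pi * 1.5 * t)
print(kick_metrics(angle))   # ~ (50.0, 1.5, 17.7)
```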
Citations: 1
Development and Evaluation of Mixed Reality Co-eating System: Sharing the Behavior of Eating Food with a Robot Could Improve Our Dining Experience
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223518
Ayaka Fujii, Kanae Kochigami, Shingo Kitagawa, K. Okada, M. Inaba
Eating with others enhances our dining experience, improves socialization, and has some health benefits. Although many people do not want to eat alone, the number of people who eat alone in Japan is increasing due to the difficulty of matching mealtimes and places with others. In this paper, we develop a mixed reality (MR) system for co-eating with a robot. In this system, a robot and an MR headset are connected, enabling users to observe the robot putting a food image into its mouth, as if eating. We conducted an experiment to evaluate the developed system with users who are at least 13 years old. Experimental results show that users enjoyed their meal more and found the food more delicious when the robot ate with them than when the robot only talked without eating. Furthermore, they ate more when the robot ate, suggesting that a robot could influence people's eating behavior.
Citations: 6
Meet Your Personal Cobot, But Don’t Touch It Just Yet*
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223573
Tudor B. Ionescu
This paper reports on a research project aimed at introducing a collaborative industrial robot into a makerspace (a public machine shop equipped with digital manufacturing technologies). Using an ethnographic approach, we observed how collaborations between researchers and non-experts are facilitated by makerspaces; how robot safety is construed and negotiated by the actors involved in the project; and how knowledge about collaborative robot safety and applications is produced in a context previously unforeseen by the creators of the technology. The proposed analysis suggests that the sociotechnical configuration of the studied project resembles that of a trading zone, in which various types of knowledge and expertise are exchanged between the researchers from the interdisciplinary project team and makerspace members. As we shall argue, the trading zone model can be useful in the analysis and organization of participatory HRI research.
Citations: 5