
2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN): Latest Publications

Evaluating Distances in Tactile Human-Drone Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515313
M. Lieser, Ulrich Schwanecke, J. Berdux
The increasing autonomy and presence of Unmanned Aerial Vehicles (UAVs), especially quadrotors, in everyday applications requires in-depth studies of proxemics in Human-Drone Interaction (HDI) and novel methods of user interaction suitable for different distances. This paper presents a user study (N=32) that evaluates proxemics with a miniature quadrotor (92 mm wheelbase) approaching from four directions (front, back, left, right) in a seated setting, investigating preferred approach directions and distances in future home or workplace scenarios. The goal of this study is to determine whether humans are willing to allow flying robots of that size and mechanical appearance to approach close enough to enable tactile interaction. Moreover, the participants' inclination to physically interact with the quadrotor is examined. Studies evaluating proxemics in HDI depend heavily on repeatable results and on robots that actually fly. In most comparable studies, the quadrotors used did not fly freely or at all, but were moved, manually controlled, or flew barely repeatable trajectories due to unstable onboard navigation. Only a few studies have used pose estimation systems that ensure smooth and reproducible trajectories and thus reliable findings. For this reason, in addition to the presented study and its results, an insight into the testbed used is provided, which also integrates full-skeleton pose estimation rather than tracking participants with only a single marker.
Citations: 4
Unfair! Perceptions of Fairness in Human-Robot Teams
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515428
M. L. Chang, J. Trafton, J. McCurry, A. Thomaz
How team members are treated influences their performance in the team and their desire to be a part of the team in the future. Prior research in human-robot teamwork proposes fairness definitions for human-robot teaming that are based on the work completed by each team member. However, metrics that properly capture people's perception of fairness in human-robot teaming remain a research gap. We present work on assessing how well objective metrics capture people's perception of fairness. First, we extend prior fairness metrics based on team members' capabilities and workload to a bigger team. We also develop a new metric to quantify the amount of time that the robot spends working on the same task as each person. We conduct an online user study (n=95) and show that these metrics align with perceived fairness. Importantly, we discover that there are bleed-over effects in people's assessment of fairness. When asked to rate fairness based on the amount of time that the robot spends working with each person, participants used two factors (fairness based on the robot's time and teammates' capabilities). This bleed-over effect is stronger when people are asked to assess fairness based on capability. From these insights, we propose design guidelines for algorithms that enable robotic teammates to consider fairness in their decision-making to maintain positive team social dynamics and team task performance.
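The abstract does not spell out the time-based metric; purely as an illustration of what such a metric could look like, here is a small sketch that scores how evenly the robot's working time is split across teammates. The normalized-entropy formulation and the function name are assumptions, not the authors' definition.

```python
import numpy as np

def time_fairness(time_with_person):
    """Illustrative fairness score in [0, 1] based on how evenly the robot's
    working time is split across teammates. The normalized-entropy form is an
    assumption for illustration, not the metric defined in the paper."""
    t = np.asarray(time_with_person, dtype=float)
    if len(t) < 2 or t.sum() == 0:
        return 1.0                       # trivially even
    p = t / t.sum()                      # share of the robot's time per person
    p = p[p > 0]                         # drop zero shares to avoid log(0)
    entropy = -(p * np.log(p)).sum()     # evenness of the time split
    return float(entropy / np.log(len(t)))

# Example: the robot spends 8 minutes with one teammate and 2 with the other.
print(time_fairness([8.0, 2.0]))         # ~0.72 (noticeably uneven)
print(time_fairness([5.0, 5.0]))         # 1.0 (perfectly even)
```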
Citations: 2
To Move or Not to Move? Social Acceptability of Robot Proxemics Behavior Depending on User Emotion
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515502
Björn Petrak, J. Stapels, Katharina Weitz, F. Eyssel, E. André
Various works show that proxemics plays an important role in human-robot interaction and that appropriate proxemic interaction depends on many characteristics of humans and robots. However, none shows the relationship between the emotional state expressed by a user and the robot's proxemic reaction to it in a social interaction between these interactants. In the current experiment (N = 82), we investigate this using an online study in which we examine which proxemic response (i.e., approaching, not moving, moving away) to a person's expressed emotional state (i.e., anger, fear, disgust, surprise, sadness, joy) is perceived as appropriate. The quantitative and qualitative data collected suggest that the robot's approach was considered appropriate for expressed fear, sadness, and joy, whereas moving away was perceived as inappropriate in most scenarios. Further exploratory findings underline the importance of appropriate nonverbal behavior for the perception of the robot.
Citations: 3
Engagement Intention Estimation in Multiparty Human-Robot Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515373
Zhijie Zhang, Jianmin Zheng, N. Magnenat-Thalmann
As the applications of intelligent agents (IAs) gradually increase in daily life, they are expected to have reasonable social intelligence to interact with people by appropriately interpreting human behavior and intention. This paper presents a method to estimate whether people are willing to join a conversation, which helps to endow IAs with the capability of detecting potential participants. The method is built on a CNN-LSTM network, which takes image features and social signals as input, making use of general information conveyed in images, semantic social cues established by social psychology studies, and temporal information in the sequence of inputs. The network is designed with a multi-branch structure that has the flexibility to accommodate different types of inputs. We also discuss the signal transition in multiparty human-robot interaction scenarios. The method is evaluated on three datasets with social signals and/or images as inputs. The results show that the proposed method can infer human engagement intention well.
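For a concrete picture of a multi-branch CNN-LSTM of the kind described, here is a minimal PyTorch sketch. The layer sizes, the two input branches (per-frame image features and social-signal vectors), and the single engagement-intention logit are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class EngagementNet(nn.Module):
    """Illustrative two-branch CNN-LSTM: one branch for per-frame image
    features, one for social-signal vectors; an LSTM models the sequence.
    All dimensions are assumptions, not the paper's architecture."""

    def __init__(self, img_feat_dim=512, social_dim=16, hidden=128):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_feat_dim, 128), nn.ReLU())
        self.social_branch = nn.Sequential(nn.Linear(social_dim, 32), nn.ReLU())
        self.lstm = nn.LSTM(128 + 32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # engagement-intention logit

    def forward(self, img_feats, social_feats):
        # img_feats: (batch, time, img_feat_dim), e.g. CNN features per frame
        # social_feats: (batch, time, social_dim), e.g. gaze/head-pose cues
        x = torch.cat([self.img_branch(img_feats),
                       self.social_branch(social_feats)], dim=-1)
        _, (h, _) = self.lstm(x)            # h: (1, batch, hidden)
        return self.head(h[-1])             # one logit per sequence

# Example forward pass on a batch of 4 ten-frame sequences.
model = EngagementNet()
logits = model(torch.randn(4, 10, 512), torch.randn(4, 10, 16))
print(logits.shape)  # torch.Size([4, 1])
```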
Citations: 1
Exploring Affective Storytelling with an Embodied Agent
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515323
R. Gomez, Deborah Szapiro, Kerl Galindo, L. Merino, H. Brock, Keisuke Nakamura, Yu Fang, Eric Nichols
In this paper, we explore the storytelling potential of a robot. We exploit the use of creative content that maximizes the embodied communication affordance of the empathic robot Haru. We identify the elements of storytelling, such as narration, agency, engagement, and education, and synthesize these into the robot. Through effective design, we investigated possible answers to the limitations and challenges of developing storytelling applications through a robotic medium. Our preliminary findings show that the use of an embodied agent such as a robot in storytelling only has meaning when its communicative affordance (i.e. embodiment, expressiveness, and other modalities) is tapped, adding a new dimension to the experience. Otherwise, traditional storytelling delivery (e.g. on a tablet) without the use of embodiment will suffice. Hence, robots need to be performers rather than mere props in storytelling.
Citations: 1
Effect of Polite Triggers in Chatbot Conversations on User Experience across Gender, Age, and Personality
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515528
Kanishk Rana, Rahul Madaan, Jainendra Shukla
Chatbots are one of the emerging intelligent systems that interact with customers to solve different queries in a wide range of domain areas. During social interaction, politeness plays a vital role in achieving effective communication. Consequently, it becomes essential to understand how a chatbot's politeness affects user experience during the interaction. To understand this, we conducted a between-subjects user study with two chatbots, where one of the chatbots employs polite triggers and the other replies only with the intent of answering the queries. To introduce politeness into normal chatbot responses, we used the state-of-the-art tag-and-generate approach. We first analyzed how different personality traits influence individuals' responses to polite triggers. In addition, we investigated the effects of polite triggers among different genders and age groups using a cross-sectional analysis.
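The abstract refers to a two-stage tag-and-generate approach for adding politeness to chatbot replies. The following sketch only illustrates the shape of that pipeline; both stub functions and the single hard-coded politeness marker are hypothetical stand-ins for the trained tagger and generator models the approach actually uses.

```python
def tag_politeness_slots(reply: str) -> str:
    """Hypothetical stub for the tagging stage: mark positions where a
    politeness marker should go (here, just the start of the reply).
    A trained tagger would learn these positions from data."""
    return "[POLITE] " + reply

def generate_polite_text(tagged_reply: str) -> str:
    """Hypothetical stub for the generation stage: fill each tag with a
    polite phrase. A trained generator would produce context-aware wording."""
    return tagged_reply.replace("[POLITE]", "Thank you for asking!").strip()

# Example: turning a plain chatbot answer into a polite one.
plain = "Your ticket has been escalated to support."
print(generate_polite_text(tag_politeness_slots(plain)))
# Thank you for asking! Your ticket has been escalated to support.
```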
Citations: 2
On the l1 Optimal State Estimator with Applications to Bipedal Robots
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515320
H. Park, Jung Hoon Kim
Motivated by the fact that a number of existing state estimation methods require presumed conditions and cannot achieve the desired accuracy when applied to real systems, this paper is concerned with providing a new framework for state estimation. We first introduce some existing state estimation methods and describe their weaknesses with respect to unknown bounded persistent elements. Aiming to account for more practical situations in real systems, which cannot be treated by the existing methods, this paper provides a new state estimation method based on the l1 optimal control theory. More precisely, the new method, called the l1 optimal state estimation, considers unknown bounded persistent elements such as external disturbances and measurement noises, which often occur in real systems and make estimation difficult. The problem of minimizing the effect of the bounded persistent elements on the corresponding state estimation error can be formulated mathematically using the arguments on l1 optimal state estimation introduced in this paper. Finally, the effectiveness of the l1 optimal state estimation is demonstrated through simulation results for the center of mass (CoM) estimation of a bipedal robot based on its linear inverted pendulum model (LIPM).
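For context, the LIPM mentioned above ties the CoM to the zero-moment point through a simple linear relation, and the l1 criterion is typically understood as a peak-to-peak gain. The sketch below uses these standard textbook formulations, which may differ in detail from the paper's exact setup.

```latex
% LIPM: CoM horizontal position x, constant CoM height z_c, ZMP p, gravity g.
\ddot{x}(t) \;=\; \frac{g}{z_c}\,\bigl(x(t) - p(t)\bigr)

% A standard discrete-time observer form (an assumed formulation, not
% necessarily the paper's), with w_k collecting bounded persistent
% disturbances and measurement noise:
x_{k+1} = A x_k + B u_k + E w_k, \qquad
y_k = C x_k + D w_k, \qquad
\hat{x}_{k+1} = A \hat{x}_k + B u_k + L\,(y_k - C \hat{x}_k)

% Estimation error e_k := x_k - \hat{x}_k; the l1 (peak-to-peak) criterion
% picks the gain L that minimizes the worst-case amplification of bounded
% persistent signals w into the error e:
\min_{L}\; \sup_{\|w\|_{\infty} \le 1} \|e\|_{\infty}
```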
Citations: 0
Robot Gaze Behavior and Proxemics to Coordinate Conversational Roles in Group Interactions
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515550
Karen Tatarian, Marine Chamoux, A. Pandey, M. Chetouani
With more social robots entering different industries such as educational systems, health-care facilities, and even airports, it is important to tackle problems that may hinder high-quality interactions in in-the-wild settings, including group conversations. This paper presents an autonomous group conversational role coordinator system based on the proxemics of the group participants with respect to the robot, including their distances and orientations. The system accordingly assigns the participants around the robot one of three different statuses: active, bystander, and overhearer. Once the statuses are estimated, the robot autonomously adjusts its gaze pattern in order to adapt to the group dynamics and distributes its attention according to the role each member of the group is playing. This system was evaluated through a pilot study (N=16), in which two participants at a time played a trivia game with the robot and had different roles to play within the interaction. The primary results imply that participants interacting with a robot that shows this adaptive gaze behavior based on conversational role coordination are more likely to stand closer to the robot. In addition, the robot was perceived as more adaptable, sociable, and socially present, as well as more likely to make the participants feel more attended to.
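To make the status assignment concrete, here is a minimal sketch that classifies a participant as active, bystander, or overhearer from distance and body orientation alone. The thresholds and the two-cue rule are assumptions for illustration, not the values or estimator used in the paper.

```python
def assign_status(distance_m: float, facing_angle_deg: float) -> str:
    """Illustrative status assignment from two proxemic cues.
    distance_m: participant's distance to the robot in meters.
    facing_angle_deg: angle between the participant's body orientation and
    the direction toward the robot (0 = facing the robot directly).
    Thresholds below are assumptions for the sketch, not the paper's values."""
    if distance_m < 1.5 and facing_angle_deg < 30.0:
        return "active"       # close and clearly oriented toward the robot
    if distance_m < 3.0 and facing_angle_deg < 90.0:
        return "bystander"    # nearby and loosely oriented toward the group
    return "overhearer"       # far away or turned away from the interaction

# Example: three participants at different distances and orientations.
print(assign_status(1.0, 10.0))   # active
print(assign_status(2.5, 60.0))   # bystander
print(assign_status(4.0, 120.0))  # overhearer
```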
Citations: 2
Estimating Robot Body Torque for Two-Handed Cooperative Physical Human-Robot Interaction
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515470
Johannes Møgster, M. Stoelen, E. Kyrkjebø
Cooperative physical Human-Robot Interaction (pHRI) aims to combine the best of human problem-solving skills with the strength, speed, and accuracy of a robot. When humans and robots physically interact, there will be interaction Forces and Torques (F/Ts) that must stay within safe limits to avoid threats to human life and unacceptable damage to equipment. When measured, these F/Ts can be limited by safety-rated emergency stops, and one can design a compliant robot behavior to reduce interaction F/Ts and avoid unnecessary emergency stops. Several recent collaborative robots offer measurements of interaction F/Ts by utilizing torque sensors in the joints or observers for joint torque, and the classical end-effector F/T sensor can provide measurements of interaction at the working end of a robot. The end-effector wrench can be calculated from joint torques if and only if there is no interaction on the robot body. Typically, safety limits are evaluated around a single point of contact, on the end-effector or elsewhere on the robot body. This approach fails when a human uses both hands to interact with a robot, e.g. when hand-guiding or otherwise cooperating with the robot by placing one hand on the robot end-effector and the other hand on the robot elbow. Having two points of contact that are evaluated as one limits the allowed F/Ts based on the sum of the contacts rather than on each contact individually. In this paper, we introduce the body torque as the interaction on the body that is not the result of interactions on the end-effector. We then use this body torque, which is a more accurate representation of the forces applied to the robot body, to limit the body interaction F/Ts and ensure safe human-robot interaction. Furthermore, the body torque can be used to design null-space compliance for a redundant robot. Distinguishing body torque is a step towards safe cooperative pHRI, where body torque, unknown end-effector loads, and end-effector interaction F/T are all important measurements for safety, control, and compliance.
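The decomposition behind the term "body torque" can be written with the standard manipulator relations; the following equations restate that textbook decomposition to make the term concrete and are not taken from the paper itself.

```latex
% External joint torque split into an end-effector contribution and a body
% contribution. J_ee is the end-effector Jacobian, F_ee the wrench at the
% end-effector, and tau_ext the estimated external joint torque.
\tau_{\mathrm{ext}} \;=\; J_{\mathrm{ee}}^{\top} F_{\mathrm{ee}} \;+\; \tau_{\mathrm{body}}
\qquad\Longrightarrow\qquad
\tau_{\mathrm{body}} \;=\; \tau_{\mathrm{ext}} - J_{\mathrm{ee}}^{\top} F_{\mathrm{ee}}

% Only when \tau_{\mathrm{body}} = 0 (no contact on the robot body) can the
% end-effector wrench be recovered from joint torques alone:
F_{\mathrm{ee}} \;=\; \bigl(J_{\mathrm{ee}}^{\top}\bigr)^{+}\, \tau_{\mathrm{ext}}
```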
Citations: 0
"Programming - It’s not for Normal People": A Qualitative Study on User-Empowering Interfaces for Programming Collaborative Robots “编程——它不适合普通人”:协作机器人编程用户授权界面的定性研究
Pub Date : 2021-08-08 DOI: 10.1109/RO-MAN50785.2021.9515535
G. Giannopoulou, Elsi-Mari Borrelli, Fiona McMaster
Technology can be empowering: it can lift the burden from tasks we find dreadful and help us excel in tasks we enjoy. Equally, a mismatch between the user interface (UI) and our skills can make us feel incompetent and reluctant towards the technology. With the trend to increase the level of automation in fields beyond traditional manufacturing, such as laboratories or small workshops, new technologies such as collaborative robots (cobots) are entering the workplace. The technology literacy levels of professionals in these fields may vary greatly depending on their age, gender, education, and personal interests, creating a challenge in designing universal cobot UIs. In this qualitative study, we address the question of how introducing interaction skills and intelligence to cobots may inhibit or encourage users to use them in their work context. The interviews, performed with 15 individuals working in laboratory settings, gave rise to numerous themes relevant to the design of user-empowering cobot interfaces for individuals with varying technology literacy levels: as "programming may not be for a normal person", talking to a robot may not be for another. Incorporating the unique interests, fears, and personal and domain experience of the end users can contribute to the design and development of cobot interfaces as diverse as their needs, thus maximizing the likelihood of successful integration across diverse work environments.
Citations: 6