
Latest publications: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)

Multi-user Robot Impression with a Virtual Agent and Features Modification According to Real-time Emotion from Physiological Signals
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223585
Shoudai Suzuki, M. N. Anuardi, Peeraya Sripian, N. Matsuhira, Midori Sugaya
Communication robots are becoming popular. In particular, partner robots, which can perform personal services, are in high demand; however, they can be prohibitively expensive. We therefore considered a multi-user robot with a virtual agent service that could satisfy user demands. Several issues must be solved to achieve this. First, there is no general service platform for such robots. Second, even when the multi-user robot runs the virtual agent service, its physical shape and other characteristics sometimes create a strong impression on users. We therefore propose a virtual agent service platform and feature modification for a multi-user robot. The robot can autonomously adjust its position according to each user’s emotion, estimated in real time from physiological signals. We present a preliminary evaluation to determine whether the proposed method can improve the robot experience even for users who are entirely unfamiliar with the robot.
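As a rough illustration of the idea of adjusting a robot's position from real-time physiological emotion estimates, the following sketch maps heart rate and electrodermal activity to an arousal score and a standoff distance. The baselines, scaling factors, and distance bounds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: physiological signals -> arousal estimate -> robot
# standoff distance, in the spirit of the paper's real-time adjustment.

def arousal_from_signals(heart_rate_bpm: float, eda_microsiemens: float) -> float:
    """Combine normalized heart rate and EDA into a rough arousal score in [0, 1].

    The 60-120 bpm and 0-10 uS normalization ranges are illustrative assumptions.
    """
    hr_norm = max(0.0, min(1.0, (heart_rate_bpm - 60.0) / 60.0))
    eda_norm = max(0.0, min(1.0, eda_microsiemens / 10.0))
    return 0.5 * hr_norm + 0.5 * eda_norm

def standoff_distance(arousal: float, d_min: float = 0.5, d_max: float = 1.5) -> float:
    """Back the robot away as arousal rises: linear interpolation between bounds (meters)."""
    return d_min + arousal * (d_max - d_min)

calm = standoff_distance(arousal_from_signals(65.0, 1.0))
stressed = standoff_distance(arousal_from_signals(110.0, 8.0))
assert calm < stressed  # a calmer user tolerates a closer robot
```

A real system would replace the fixed thresholds with a calibrated, per-user emotion model, as the paper's per-user physiological baseline suggests.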
Citations: 0
[RO-MAN 2020 Front matter]
Pub Date: 2020-08-01 DOI: 10.1109/ro-man47096.2020.9223538
Citations: 0
Robocentric Conversational Group Discovery
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223570
Viktor Schmuck, Tingran Sheng, O. Çeliktutan
Detecting people interacting and conversing with each other is essential to equipping social robots with autonomous navigation and service capabilities in crowded social scenes. In this paper, we introduce a method for unsupervised conversational group detection in images captured from a mobile robot's perspective. To this end, we collected a novel dataset called Robocentric Indoor Crowd Analysis (RICA). The RICA dataset features over 100,000 RGB, depth, and wide-angle camera images as well as LIDAR readings, recorded during a social event in which the robot navigated between participants and captured group interactions with its on-board sensors. Using the RICA dataset, we implemented an unsupervised group detection method based on agglomerative hierarchical clustering. Our results show that incorporating the depth modality and using normalised features in the clustering algorithm improved group detection accuracy by a margin of 3% on average.
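To make the clustering step concrete, here is a minimal sketch of distance-threshold agglomerative clustering over normalized 2-D person positions. The feature set (positions only) and the threshold value are illustrative assumptions; the paper's actual feature set and parameters are not reproduced here.

```python
# Sketch: single-linkage agglomerative clustering with a distance threshold,
# applied to normalized person positions, as a stand-in for group detection.

def normalize(points):
    """Scale each coordinate to [0, 1] so no feature dominates the distance metric."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    def scale(v, lo, hi):
        return 0.0 if hi == lo else (v - lo) / (hi - lo)
    return [(scale(x, min(xs), max(xs)), scale(y, min(ys), max(ys)))
            for x, y in points]

def agglomerate(points, threshold):
    """Start with singleton clusters; repeatedly merge the closest pair
    until the minimum inter-cluster (single-linkage) distance exceeds threshold."""
    clusters = [[i] for i in range(len(points))]
    def dist(a, b):
        return min(((points[i][0] - points[j][0]) ** 2 +
                    (points[i][1] - points[j][1]) ** 2) ** 0.5
                   for i in a for j in b)
    while len(clusters) > 1:
        (i, j), d = min((((i, j), dist(clusters[i], clusters[j]))
                         for i in range(len(clusters))
                         for j in range(i + 1, len(clusters))), key=lambda t: t[1])
        if d > threshold:
            break
        clusters[i] += clusters.pop(j)
    return clusters

# Two conversing pairs standing far apart: expect two groups.
pts = normalize([(0.0, 0.0), (0.4, 0.1), (5.0, 5.0), (5.2, 4.9)])
groups = agglomerate(pts, threshold=0.3)
assert len(groups) == 2
```

In practice one would use an optimized implementation (e.g. SciPy's hierarchical clustering) and richer features such as body orientation and depth, as the paper's results suggest.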
Citations: 3
Study on a Manipulatable Endoscope with Fins Knit by a Biodegradable String
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223472
K. Makino, F. Iwamoto, Hiromi Watanabe, Tadashi Sato, H. Terada, Naoto Sekiya
The accuracy of capsule endoscope inspection increases if the operator can manipulate the endoscope via wireless communication. Feasibility is key to developing such a device, and the endoscope's behavior after a breakdown must also be considered. This paper therefore describes a manipulatable capsule endoscope that behaves like a normal endoscope even if it breaks inside the patient's body, and that differs only slightly from a normal endoscope. The fin that provides maneuverability is knit from a biodegradable surgical string that dissolves in the body, which allows various shapes to be realized. Safety is guaranteed because the fin dissolves if it detaches inside the body. In the fundamental experiments, a small motor is employed as the actuator that moves the fin, so that the shape of the capsule endoscope does not change. The proposed endoscope can behave as a normal capsule endoscope, since its shape is similar. Fundamental experiments confirm the feasibility of the proposed endoscope.
Citations: 0
A bistable soft gripper with mechanically embedded sensing and actuation for fast grasping
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223487
T. G. Thuruthel, S. H. Abidi, M. Cianchetti, C. Laschi, E. Falotico
Soft robotic grippers have been shown to be highly effective for grasping unstructured objects with simple sensing and control strategies. However, they are still limited by their speed, sensing capabilities, and actuation mechanisms; hence, their use in highly dynamic grasping tasks has been restricted. This paper presents a soft robotic gripper with tunable bistable properties for sensor-less dynamic grasping. The bistable mechanism allows arbitrarily large strain energy to be stored in the soft system and released upon contact. The mechanism also provides flexibility in the choice of actuation mechanism, as the grasping and sensing phases are completely passive. The theoretical background of the mechanism is presented together with finite element analysis to provide insight into the design parameters. Finally, we experimentally demonstrate sensor-less dynamic grasping of an unknown object within 0.02 seconds, including the time to sense and actuate.
Citations: 14
Towards a Real-Time Cognitive Load Assessment System for Industrial Human-Robot Cooperation
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223531
Akilesh Rajavenkatanarayanan, Harish Ram Nambiappan, Maria Kyrarini, F. Makedon
Robots are increasingly present in environments shared with humans and can cooperate with their human teammates to achieve common goals and complete tasks. This paper focuses on developing a real-time framework that assesses the cognitive load of a human cooperating with a robot on a collaborative assembly task. The framework uses multi-modal sensory data from electrocardiography (ECG) and electrodermal activity (EDA) sensors, extracts novel features from the data, and utilizes machine learning methodologies to detect high or low cognitive load. The framework was evaluated in a user study on a collaborative assembly scenario. The results show that it reliably recognizes high cognitive load, a first step toward enabling robots to better understand their human teammates.
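One widely used ECG-derived feature in this setting is RMSSD (root mean square of successive RR-interval differences), a heart-rate-variability measure that tends to drop under high cognitive load. The sketch below computes it and thresholds it as a stand-in for the paper's trained classifier; the feature choice and the threshold value are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: a single HRV feature (RMSSD) with a fixed threshold as a toy
# cognitive-load detector. A real system would feed ECG and EDA features
# into a trained machine-learning classifier instead.
import math

def rmssd(rr_intervals_ms):
    """RMSSD over a list of RR intervals (milliseconds)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def high_cognitive_load(rr_intervals_ms, threshold_ms=30.0):
    """Low heart-rate variability (RMSSD below threshold) flags high load.

    The 30 ms threshold is an illustrative assumption; in practice it would
    be learned per user from labeled calibration data.
    """
    return rmssd(rr_intervals_ms) < threshold_ms

relaxed = [800, 850, 790, 860, 795]   # large beat-to-beat variation
loaded = [700, 702, 699, 701, 700]    # near-constant intervals
assert not high_cognitive_load(relaxed)
assert high_cognitive_load(loaded)
```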
Citations: 14
Relevant Perception Modalities for Flexible Human-Robot Teams
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223593
Nico Höllerich, D. Henrich
Robust and reliable perception plays an important role when humans engage in cooperation with robots in industrial or household settings. Various explicit and implicit communication modalities and perception methods can be used to recognize expressed intentions. Depending on the modality, different sensors, areas of observation, and perception methods need to be utilized, and more modalities increase the complexity and costs of the setup. We consider the scenario of a cooperative task in a potentially noisy environment, where verbal communication is hardly feasible. Our goal is to investigate the importance of different non-verbal communication modalities for intention recognition. To this end, we build upon an established benchmark study for human cooperation and investigate which input modalities contribute most towards recognizing the expressed intention. To measure the detection rate, we conducted a second study in which participants had to predict actions based on a stream of symbolic input data. Findings confirm the existence of a common gesture dictionary and the importance of hand tracking for action prediction when the number of feasible actions increases. The contribution of this work is a usage ranking of gestures and a comparison of input modalities to improve prediction capabilities in human-robot cooperation.
Citations: 2
Benchmarks for evaluating human-robot interaction: lessons learned from human-animal interactions
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223347
E. Lagerstedt, Serge Thill
Human-robot interaction (HRI) is fundamentally concerned with studying the interaction between humans and robots. While it is still a relatively young field, it can draw inspiration from other disciplines studying human interaction with other types of agents. Often, such inspiration is sought from the study of human-computer interaction (HCI) and the social sciences studying human-human interaction (HHI). More rarely, the field also turns to human-animal interaction (HAI). In this paper, we identify two distinct underlying motivations for making such comparisons: to form a target to recreate, or to obtain a benchmark (or baseline) for evaluation. We further highlight relevant existing overlap between HRI and HAI, and identify specific themes of particular interest for further trans-disciplinary exploration. At the same time, since robots and animals are clearly not the same, we also discuss important differences between HRI and HAI, their complementarity notwithstanding. The overall purpose of this discussion is thus to create awareness of the potential mutual benefit between the two disciplines and to describe opportunities for future work, both in terms of new domains to explore and existing results to learn from.
Citations: 5
LinkBricks: A Construction Kit for Intuitively Creating and Programming Interactive Robots
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223607
Jiasi Gao, Meng Wang, Y. Zhu, Haipeng Mi
This paper presents LinkBricks, a creative construction kit that lets young children intuitively create and program interactive robots. Integrating building blocks, a hierarchical programming framework, and a tablet application, the kit is designed to maintain a low floor and wide walls for children who lack knowledge of conventional programming. The blocks have LEGO-compatible interlock structures and are embedded with various wireless sensors and actuators to create different interactive robots. The programming application is easy to use and provides heuristics to involve children in creative activities. A preliminary evaluation indicates that LinkBricks increases young children’s engagement with, comfort with, and interest in working with interactive robots. Meanwhile, it has the potential to help them learn the concepts of programming and robots.
Citations: 0
Tell me more! A Robot’s Struggle to Achieve Artificial Awareness
Pub Date: 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223458
H. Sirithunge, K. S. Priyanayana, Ravindu T. Bandara, Nikolas Dahn, A. Jayasekara, Chandima Dedduwa Chandima
There are many cognitive and psychophysical theories that explain human behavior as well as the behavior of robots. Even so, we still lack a model for perceiving and predicting appropriate behaviors for both the human and the robot during a human-robot encounter. Humans instantly evaluate their surroundings and the people in them before approaching a person or a situation. As robots become more common in social environments, a similar perception of the situation around a human user prior to an interaction is required, since social constraints during an interaction could be undermined by a faulty assessment. In this paper, we discuss the requirements for a robot to proactively perceive the nature of a situation, and identify the functional units that come into play during such an encounter. We further identify the cues that such intelligent agents use to simulate and evaluate the outcomes of their environment. From this, we discuss the requirements for a unified theory of cognition during human-robot encounters. We also highlight implications for design constraints in such a scenario.
Citations: 0