
2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN): Latest Publications

Bright and Dark Timbre Expressions with Sound Pressure and Tempo Variations by Violin-playing Robot*
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223503
K. Shibuya, Kento Kosuga, H. Fukuhara
This study aims to build a violin-playing robot that can automatically determine how to perform based on the information included in musical scores. In this paper, we discuss the design of the variation pattern for the tempo of every bar and the sound pressure of every musical note to produce sounds that can convey bright and dark impressions. First, we present the analytical results of a trained violinist’s performance, in which we found that the tempo of the bright timbre is faster than that of the dark timbre, and the bright performance includes several steep variations in the sound pressure pattern. We then propose a design method for the performance to express bright and dark timbres based on the analytical results. In the experiments, sounds were produced by our anthropomorphic violin-playing robot, which can vary the sound pressure by varying a wrist joint angle. The sounds produced by the robot were analyzed, and we confirmed that the patterns of the produced sound pressure for the bright performance are similar to those of the designed one. The sounds were also evaluated by ten subjects, and we found that they distinguished the bright performances from the dark ones when the sound pressure and tempo variations were included.
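To make the design idea above concrete, here is a toy Python sketch of how per-bar tempo and per-note sound-pressure targets might differ between a bright and a dark performance; the specific numbers and the helper name `performance_targets` are illustrative assumptions, not the patterns designed in the paper.

```python
import numpy as np

# Toy sketch (not the paper's designed patterns): a "bright" performance gets a
# faster per-bar tempo and steeper note-to-note sound-pressure swings than a
# "dark" one, following the tendencies reported for the trained violinist.
def performance_targets(num_bars, num_notes, timbre="bright", base_tempo_bpm=90.0):
    notes = np.arange(num_notes)
    if timbre == "bright":
        tempo_per_bar = base_tempo_bpm * 1.1 * np.ones(num_bars)   # faster tempo
        pressure_per_note = 0.7 + 0.3 * np.abs(np.sin(notes))      # steep swings
    else:
        tempo_per_bar = base_tempo_bpm * 0.9 * np.ones(num_bars)   # slower tempo
        pressure_per_note = 0.6 + 0.05 * np.sin(notes)             # gentle swings
    return tempo_per_bar, pressure_per_note
```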
Citations: 2
Influences of Media Literacy and Experiences of Robots into Negative Attitudes toward Robots in Japan
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223590
T. Nomura, Shun Horii
To investigate the influences of media literacy on experiences of and negative attitudes toward robots, an online survey was conducted in Japan (N = 500). The results suggested that the connections of robot experiences with media literacy and negative attitudes toward robots were weak, and that both media literacy and robot experiences had negative effects on negative attitudes toward interaction with robots.
Citations: 1
An Adaptive Control Approach to Robotic Assembly with Uncertainties in Vision and Dynamics
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223515
Emir Mobedi, Nicola Villa, Wansoo Kim, A. Ajoudani
The objective of this paper is to propose an adaptive impedance control framework to cope with uncertainties in vision and dynamics in robotic assembly tasks. The framework is composed of an adaptive controller, a vision system, and an interaction planner, which are all supervised by a finite state machine. In this framework, the target assembly object’s pose is detected through the vision module, which is then used for the planning of the robot trajectories. The adaptive impedance control module copes with the uncertainties of the vision and the interaction planner modules in alignment of the assembly parts (a peg and a hole in this work). Unlike the classical impedance controllers, the online adaptation rule regulates the level of robot compliance in constrained directions, acting on and responding to the external forces. This enables the implementation of a flexible and adaptive Remote Center of Compliance (RCC) system, using active control. We first evaluate the performance of the proposed adaptive controller in comparison to classical impedance control. Next, the overall performance of the integrated system is evaluated in a peg-in-hole setup, with different clearances and orientation mismatches.
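As a rough illustration of the kind of online compliance adaptation described in this abstract, the Python sketch below lowers the Cartesian stiffness of an impedance controller in the constrained directions when large external forces indicate misalignment; the gain values, the adaptation rule, and the function names are assumptions for illustration, not the controller published in the paper.

```python
import numpy as np

# Minimal sketch of an impedance controller with online stiffness adaptation
# (all gains and the adaptation rule are illustrative assumptions).
def impedance_force(K, D, x_des, x, x_dot):
    """Cartesian impedance law: F = K * (x_des - x) - D * x_dot."""
    return K * (x_des - x) - D * x_dot

def adapt_stiffness(K, f_ext, K_min, K_max, alpha=0.5, dt=0.001):
    """Soften the constrained directions when contact forces grow, so the peg
    can comply with the hole, then clip to keep the controller well behaved."""
    K_next = K - alpha * np.abs(f_ext) * dt
    return np.clip(K_next, K_min, K_max)

# Example: one control step along two constrained axes.
K = np.array([800.0, 800.0])          # stiffness, N/m
D = np.array([40.0, 40.0])            # damping, Ns/m
f_ext = np.array([12.0, -3.0])        # measured contact force, N
K = adapt_stiffness(K, f_ext, K_min=100.0, K_max=1200.0)
F_cmd = impedance_force(K, D, x_des=np.zeros(2),
                        x=np.array([0.002, -0.001]), x_dot=np.zeros(2))
```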
Citations: 3
Pedestrian Density Based Path Recognition and Risk Prediction for Autonomous Vehicles
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223554
Kasra Mokhtari, Ali Ayub, Vidullan Surendran, Alan R. Wagner
Human drivers continually use social information to inform their decision making. We believe that incorporating this information into autonomous vehicle decision making would improve performance and importantly safety. This paper investigates how information in the form of pedestrian density can be used to identify the path being travelled and predict the number of pedestrians that the vehicle will encounter along that path in the future. We present experiments which use camera data captured while driving to evaluate our methods for path recognition and pedestrian density prediction. Our results show that we can identify the vehicle’s path using only pedestrian density at 92.4% accuracy and we can predict the number of pedestrians the vehicle will encounter with an accuracy of 70.45%. These results demonstrate that pedestrian density can serve as a source of information both perhaps to augment localization and for path risk prediction.
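The abstract does not specify the model used, so the following Python sketch only illustrates the general idea of recognizing which path is being driven from a sequence of per-frame pedestrian counts, using a simple nearest-centroid rule over density statistics; all function names and features are assumptions, not the authors' method.

```python
import numpy as np

# Illustrative sketch (not the paper's model): classify the travelled path
# from pedestrian-count sequences by nearest-centroid matching.
def density_features(counts):
    counts = np.asarray(counts, dtype=float)
    return np.array([counts.mean(), counts.std(), counts.max()])

def fit_centroids(sequences_by_path):
    """sequences_by_path: dict mapping path_id -> list of count sequences."""
    return {path: np.mean([density_features(s) for s in seqs], axis=0)
            for path, seqs in sequences_by_path.items()}

def predict_path(counts, centroids):
    feats = density_features(counts)
    return min(centroids, key=lambda p: np.linalg.norm(feats - centroids[p]))

# Example usage with made-up training sequences for two routes.
centroids = fit_centroids({"campus": [[5, 8, 12, 9], [7, 11, 10, 6]],
                           "highway": [[0, 1, 0, 2], [1, 0, 0, 1]]})
print(predict_path([6, 9, 11, 8], centroids))   # -> "campus"
```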
Citations: 4
Estimation of Mental Health Quality of Life using Visual Information during Interaction with a Communication Agent
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223606
S. Nakagawa, S. Yonekura, Hoshinori Kanazawa, Satoshi Nishikawa, Y. Kuniyoshi
It is essential for a monitoring system or a communication robot that interacts with an elderly person to accurately understand the user’s state and generate actions based on their condition. To ensure elderly welfare, quality of life (QOL) is a useful indicator for determining human physical suffering and mental and social activities in a comprehensive manner. In this study, we hypothesize that visual information is useful for extracting high-dimensional information on QOL from data collected by an agent while interacting with a person. We propose a QOL estimation method that integrates facial expressions, head fluctuations, and eye movements, which can be extracted as visual information during the interaction with the communication agent. Our goal is to implement a multiple feature vectors learning estimator that incorporates convolutional 3D to learn spatiotemporal features. However, no existing database is available for QOL estimation. Therefore, we implement a free communication agent and construct our database from information collected through interpersonal experiments using the agent. To verify the proposed method, we focus on the estimation of the "mental health" QOL scale, which, based on a previous study, is the most difficult to estimate among the eight scales that compose QOL. We compare four estimation accuracies: single-modal learning using each of the three features (facial expressions, head fluctuations, and eye movements) and multiple feature vectors learning integrating all three features. The experimental results show that multiple feature vectors learning yields smaller estimation errors than each single-modal learning, which uses one feature separately. The experimental results evaluating the difference between the QOL score estimated by the proposed method and the actual QOL score calculated by the conventional method also show that the average error is less than 10 points; thus, the proposed system can estimate the QOL score. Hence, the proposed approach for estimating human conditions can improve the quality of human–robot interactions and personalized monitoring.
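To visualize the "multiple feature vectors learning" idea, here is a minimal PyTorch sketch with three small 3D-convolutional branches (face, head motion, gaze) fused before a regression head; the layer sizes, input format, and class name are placeholders and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of a multi-stream spatiotemporal estimator (placeholder sizes,
# not the paper's network): each branch takes a video clip shaped
# (batch, 3, frames, height, width), and the fused features regress a QOL score.
class MultiStreamQOL(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.face, self.head, self.gaze = branch(), branch(), branch()
        self.regressor = nn.Linear(3 * feat_dim, 1)

    def forward(self, face_clip, head_clip, gaze_clip):
        fused = torch.cat([self.face(face_clip), self.head(head_clip),
                           self.gaze(gaze_clip)], dim=1)
        return self.regressor(fused)

# Example forward pass with dummy 16-frame, 64x64 clips.
clips = [torch.randn(1, 3, 16, 64, 64) for _ in range(3)]
score = MultiStreamQOL()(*clips)   # shape (1, 1)
```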
Citations: 2
Influence of vertical acceleration for inducing sensation of dropping by lower limb force feedback device
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223476
Toshinari Tanaka, Yuki Onozuka, M. Okui, Rie Nishihama, Taro Nakamura
Many haptic devices are currently being developed for the human upper limbs. There are various types of force feedback devices for the upper limbs, such as desktop and wearable types. However, the lower limbs absorb most of the force when standing or walking. Therefore, to render force sensations to the lower limbs, two kinds of devices have been developed: a device worn like a shoe that enables users to walk with a wide range of movement, and a device that provides a dropping sensation. However, wide-area movement and a dropping sensation have not been combined in a single device. Therefore, the authors propose the concept of a lower limb force feedback device that is worn like a shoe and provides the sensation of dropping while enabling wide-area movement. In addition, as the first stage of device development, the authors evaluated the human sensation of dropping. It was found that a relatively strong sensation of dropping can be provided to a human even with an acceleration smaller than the gravitational acceleration in real space. Thus, the lower limb force feedback device to be developed in the future will allow the user to experience the sensation of dropping by using an acceleration smaller than the gravitational acceleration in real space.
Citations: 3
An Experimental Study of the Accuracy vs Inference Speed of RGB-D Object Recognition in Mobile Robotics
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223562
Ricardo Pereira, T. Barros, L. Garrote, Ana C. Lopes, U. Nunes
This paper presents a study of the accuracy and inference speed of RGB-D object detection and classification for mobile platform applications. The study is divided into three stages. In the first, eight state-of-the-art CNN-based object classifiers (AlexNet, VGG16/19, ResNet18/50/101, DenseNet, and MobileNetV2) are used to compare the attained performances with the corresponding inference speeds in object classification tasks. The second stage consists of exploiting YOLOv3/YOLOv3-tiny networks as a Region of Interest generator. To obtain a real-time object recognition pipeline, the final stage unifies YOLOv3/YOLOv3-tiny with a CNN-based object classifier. The pipeline evaluates each object classifier with each Region of Interest generator method in terms of accuracy and frame rate. For the evaluation of the proposed study under the conditions in which real robotic platforms navigate, a non-object-centric RGB-D dataset was recorded in the Institute of Systems and Robotics facilities using a camera on board the ISR-InterBot mobile platform. Experimental evaluations were also carried out on the Washington and COCO datasets. Promising performances were achieved by the combination of YOLOv3-tiny and ResNet18 networks on the embedded Nvidia Jetson TX2 hardware.
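The two-stage pipeline described above (a YOLO-based Region of Interest generator followed by a CNN classifier) can be sketched as follows; `detect_rois` and `classify_crop` stand in for whatever detector and classifier are chosen (e.g. YOLOv3-tiny and ResNet18) and are placeholder assumptions, not a specific API from the paper.

```python
import numpy as np

# Sketch of the detector-then-classifier pipeline (placeholder callables, not
# the authors' implementation).
def recognize_objects(rgb_image, detect_rois, classify_crop):
    """rgb_image: HxWx3 array.
    detect_rois: image -> list of (x, y, w, h) boxes.
    classify_crop: cropped image -> (label, confidence)."""
    results = []
    for (x, y, w, h) in detect_rois(rgb_image):
        crop = rgb_image[y:y + h, x:x + w]        # region proposed by YOLO
        label, conf = classify_crop(crop)         # fine-grained classification
        results.append({"box": (x, y, w, h), "label": label, "conf": conf})
    return results

# Dummy usage with stand-in functions.
dummy = recognize_objects(np.zeros((480, 640, 3), dtype=np.uint8),
                          detect_rois=lambda img: [(100, 80, 50, 60)],
                          classify_crop=lambda crop: ("mug", 0.91))
```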
Citations: 5
Multiple-Robot Mediated Discussion System to support group discussion *
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223444
Shogo Ikari, Y. Yoshikawa, H. Ishiguro
Deep discussions on topics without definite answers are important for society, but they are also challenging to facilitate. Recently, advances in the technology of using robots to facilitate discussions have been made. In this study, we developed a multiple-robot mediated discussion system (m-RMDS) to support discussions by having multiple robots assert their own points and lead a dialogue in a group of human participants. The robots involved the participants in a discussion through asking them for advice. We implemented the m-RMDS in discussions on difficult topics with no clear answers. A within-subject experiment with 16 groups (N=64) was conducted to evaluate the contribution of the m-RMDS. The participants completed a questionnaire about their discussion skills and their self-confidence. Then, they participated in two discussions, one facilitated by the m-RMDS and one that was unfacilitated. They evaluated and compared both experiences across multiple aspects. The participants with low confidence in conducting a discussion evaluated the discussion with m-RMDS as easier to move forward than the discussion without m-RMDS. Furthermore, they reported that they heard more of others' frank opinions during the facilitated discussion than during the unfacilitated one. In addition, regardless of their confidence level, the participants tended to respond that they would like to use the system again. We also review necessary improvements to the system and suggest future applications.
Citations: 2
The Effects of Internet of Robotic Things on In-home Social Family Relationships
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223345
Byeong June Moon, Sonya S. Kwak, Dahyun Kang, Hanbyeol Lee, Jong-suk Choi
Robotic things and social robots have been introduced into the home, and they are expected to change the relationships between humans. Our study examines whether the introduction of robotic things or social robots, and the way that they are organized, can change the social relationships between family members. To observe this phenomenon, we designed a living lab experiment that simulated a home environment and recruited two families to participate. Families were asked to conduct home activities within two different types of Internet of Robotic Things (IoRT): 1) an internet of only robotic things (IoRT without mediator condition), and 2) an internet of robotic things mediated by a social robot (IoRT with mediator condition). We recorded the interactions between the family members and the robotic things during the experiments and coded them into a dataset for social network analysis. The results revealed relationship differences between the two conditions. The introduction of IoRT without a mediator motivated younger-generation family members to share the burden of caring for other members, which was previously the duty of the mothers. However, this made the interaction network inefficient for indirect interaction. On the contrary, introducing IoRT with a mediator did not significantly change family relationships at the actor level, and the mothers remained in charge of caring for other family members. However, IoRT with a mediator made indirect interactions within the network more efficient. Furthermore, the role of the social robot mediator overlapped with that of the mothers. This shows that a social robot mediator can help mothers care for other members of the family by operating and managing robotic things. Additionally, we discussed the implications for developing the IoRT for the home.
Citations: 1
Teaching Robots Novel Objects by Pointing at Them
Pub Date : 2020-08-01 DOI: 10.1109/RO-MAN47096.2020.9223596
S. Gubbi, Raviteja Upadrashta, Shishir N. Y. Kolathaya, B. Amrutur
Robots that must operate in novel environments and collaborate with humans must be capable of acquiring new knowledge from human experts during operation. We propose teaching a robot novel objects it has not encountered before by pointing a hand at the new object of interest. An end-to-end neural network is used to attend to the novel object of interest indicated by the pointing hand and then to localize the object in new scenes. In order to attend to the novel object indicated by the pointing hand, we propose a spatial attention modulation mechanism that learns to focus on the highlighted object while ignoring the other objects in the scene. We show that a robot arm can manipulate novel objects that are highlighted by pointing a hand at them. We also evaluate the performance of the proposed architecture on a synthetic dataset constructed using emojis and on a real-world dataset of common objects.
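As a rough picture of how a pointing location could modulate spatial attention, the sketch below weights a convolutional feature map with a Gaussian mask centred on the indicated position; the Gaussian form, the `sigma` parameter, and the function name are illustrative assumptions rather than the attention mechanism learned end-to-end in the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's learned mechanism): suppress features
# away from the pointed-at location so downstream layers focus on that object.
def spatial_attention(feature_map, point_xy, sigma=5.0):
    """feature_map: (C, H, W) array; point_xy: (x, y) in feature-map coords."""
    C, H, W = feature_map.shape
    ys, xs = np.mgrid[0:H, 0:W]
    mask = np.exp(-((xs - point_xy[0]) ** 2 + (ys - point_xy[1]) ** 2)
                  / (2.0 * sigma ** 2))
    return feature_map * mask[None, :, :]

# Example: emphasize the region around location (20, 12) of a 64-channel map.
attended = spatial_attention(np.random.rand(64, 32, 40), point_xy=(20, 12))
```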
Citations: 6