
2020 17th International Conference on Ubiquitous Robots (UR): Latest Publications

Robot Behavior Design Expressing Confidence/Unconfidence based on Human Behavior Analysis
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144862
Haruka Sekino, Erina Kasano, Wei-Fen Hsieh, E. Sato-Shimokawara, Toru Yamaguchi
Dialogue robots have been actively researched. Many of these robots rely merely on verbal information. However, human intention is conveyed through both verbal and nonverbal information; to convey intention as humans do, robots need to express intention through both channels. This paper uses speech information and head motion information to express confidence/unconfidence, because these are useful features for estimating a person’s confidence. First, human behavior expressing the presence or absence of confidence was collected from 8 participants, recorded by a microphone and a video camera. To select the most understandable behavior, the participants’ behavior was rated for confidence level by 3 evaluators, and the data of participants whose behavior was rated as more understandable were selected. The selected behavior was defined as the representative speech and motion features, and robot behavior was designed based on them. Finally, an experiment was conducted in which 5 participants rated the designed robot behavior. The results show that 3 participants correctly identified the confidence/unconfidence behavior based on the representative speech features; the differences between confident and unconfident behavior are the time spent before answering, the effective (RMS) value of sound pressure, and utterance speed. Also, 3 participants correctly identified the unconfident behavior based on the representative motion features, namely a longer time before answering and a larger head rotation.
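The abstract names the three separating speech features but gives no extraction details; the sketch below shows one plausible way to compute them from a recorded answer, assuming the question-to-answer latency and syllable count are already known. Function and parameter names are illustrative, not the paper's.

```python
import numpy as np

def speech_confidence_features(waveform, sr, n_syllables, speech_onset_s):
    """Illustrative extraction of the three speech features the paper
    reports as separating confident from unconfident answers.

    waveform       : 1-D float array, the recorded answer (question ends at t=0)
    sr             : sampling rate in Hz
    n_syllables    : syllable count of the answer (assumed known, e.g. from a transcript)
    speech_onset_s : seconds from the end of the question to the start of the answer
    """
    speech = waveform[int(speech_onset_s * sr):]        # answer portion only
    rms_sound_pressure = np.sqrt(np.mean(speech ** 2))  # "effective value" = RMS
    duration_s = len(speech) / sr
    utterance_speed = n_syllables / duration_s          # syllables per second
    return {
        "latency_s": speech_onset_s,        # time spent before answering
        "rms": rms_sound_pressure,          # effective value of sound pressure
        "speed_syl_per_s": utterance_speed,
    }
```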
Citations: 1
FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System*
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144933
Yanjun Zhang, Jun Xie, Guanghua Xu, Peng Fang, Guiling Cui, Guanglin Li, Guozhi Cao, Tao Xue, Xiaodong Zhang, Min Li, T. Tao
In order to improve the practicability of brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEP), it is necessary to design portable, low-cost BCI equipment. According to the principle of stochastic resonance (SR), the recognition accuracy of visual evoked potentials can be improved by full-screen visual noise. Based on these requirements, this paper proposes using a field-programmable gate array (FPGA) to control the stimulator through a high-definition multimedia interface (HDMI) to display the steady-state motion visual evoked potential (SSMVEP) paradigm. By adding spatially localized visual noise to the motion-reversal checkerboard paradigm, the recognition accuracy is improved. Under different noise levels, the average recognition accuracies calculated with occipital electrodes O1, Oz, O2, PO3, POz, and PO4 are 77.2%, 87.5%, and 85.2%, corresponding to noise standard deviations of 0, 24, and 40, respectively. To analyze the SR effect of the spatially localized visual noise on recognition accuracy, statistical analyses of the recognition accuracies under different noise intensities and different channel combinations were carried out. Results show that the spatially localized visual noise significantly improves the recognition accuracy and stability of the proposed FPGA-based online SSMVEP BCI system.
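The paper does not publish its stimulus-generation code; below is a minimal sketch of how zero-mean Gaussian visual noise with a chosen standard deviation (the abstract tests 0, 24, and 40) could be added to a localized region of a stimulus frame. The region layout and per-frame regeneration are assumptions.

```python
import numpy as np

def add_localized_visual_noise(frame, noise_std, region):
    """Add zero-mean Gaussian luminance noise to one region of a stimulus frame.

    frame     : 2-D uint8 array (grayscale stimulus, 0-255)
    noise_std : standard deviation of the noise (the paper tests 0, 24 and 40)
    region    : (row0, row1, col0, col1) bounding box of the noisy patch
    """
    r0, r1, c0, c1 = region
    noisy = frame.astype(np.float32)
    noise = np.random.normal(0.0, noise_std, size=noisy[r0:r1, c0:c1].shape)
    noisy[r0:r1, c0:c1] += noise
    return np.clip(noisy, 0, 255).astype(np.uint8)

# e.g. regenerate the noise patch on every video frame around the checkerboard:
# frame = add_localized_visual_noise(frame, noise_std=24, region=(100, 400, 100, 400))
```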
Citations: 1
Human-robot negotiation of intentions based on virtual fixtures for shared task execution
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144859
Dong Wei, Hua Zhou, Huayong Yang
Robots increasingly work side-by-side with humans, fusing their complementary capabilities to cooperate on tasks in a wide range of applications such as exoskeletons, industry, and health care. To promote natural interaction between humans and robots, the human ability to negotiate intentions through haptic channels has inspired a number of studies aimed at improving human-robot interaction performance. In this work, we propose a novel human-robot negotiation policy and introduce adaptive virtual fixture technology into traditional mechanisms to integrate bilateral intentions. In this policy, virtual fixtures are used to generate and adjust virtual paths during negotiation with the human partner, speeding up the person’s perception of the robot’s task and making negotiation more efficient. Moreover, the path adapts online to the estimated human intention, providing better solutions for both members of the dyad while ensuring performance. The proposed strategy is verified in collaborative obstacle-avoidance experiments.
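The abstract does not give the fixture's control law; the sketch below uses a common soft virtual fixture formulation, assuming a spring-like guidance force whose strength is modulated by a compliance factor, to illustrate how a virtual path can guide the human without rigidly constraining them. All names and gains are illustrative.

```python
import numpy as np

def virtual_fixture_force(tool_pos, path_points, stiffness, compliance):
    """Guidance force of a soft virtual fixture toward a reference path.

    tool_pos    : (3,) current end-effector position
    path_points : (N, 3) densely sampled virtual path
    stiffness   : spring gain pulling the tool toward the path
    compliance  : 0 -> rigid fixture, 1 -> no constraint (human fully free)
    """
    # closest point on the (sampled) path
    d = np.linalg.norm(path_points - tool_pos, axis=1)
    nearest = path_points[np.argmin(d)]
    # spring force toward the path, scaled down as compliance grows
    return (1.0 - compliance) * stiffness * (nearest - tool_pos)
```

Adapting the path online, as the abstract describes, would amount to regenerating `path_points` whenever the estimated human intention changes.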
Citations: 1
Trajectory Tracking of Robotic Manipulators with Constraints Based on Model Predictive Control
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144943
Q. Tang, Zhugang Chu, Yu Qiang, Shun Wu, Zheng Zhou
This paper presents a model predictive control scheme for robotic manipulators tracking trajectories in the presence of input constraints, providing convergent tracking of reference trajectories and robustness to model mismatch. Firstly, the dynamic model of an n-link robotic manipulator is linearized and discretized using a Taylor approximation, based on which the constrained optimization problem is converted to a quadratic programming problem. Then the future output of the system is predicted and the optimal control problem is solved online according to the current state and previous input, with a terminal constraint included to reduce the tracking error. Finally, the convergence of the proposed control scheme is demonstrated in simulation with the UR5 model, and its robustness to model mismatch is verified by comparison with a classical predictive control method.
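As a concrete illustration of the described pipeline (linearized discrete dynamics, input constraints, terminal term, online QP), here is a minimal sketch using cvxpy. The weights, horizon, and the soft terminal penalty standing in for the paper's terminal constraint are assumptions.

```python
import numpy as np
import cvxpy as cp

def mpc_step(A, B, x0, x_ref, u_max, N=10):
    """One MPC step for linearized discrete dynamics x[k+1] = A x[k] + B u[k].

    A, B   : discretized manipulator dynamics (e.g. from a Taylor expansion
             about the reference, as the abstract describes)
    x0     : current state
    x_ref  : (N+1, nx) reference states over the horizon
    u_max  : elementwise input (torque) bound
    """
    nx, nu = B.shape
    x = cp.Variable((N + 1, nx))
    u = cp.Variable((N, nu))
    Q, R, QN = np.eye(nx), 0.1 * np.eye(nu), 10 * np.eye(nx)  # illustrative weights

    cost, cons = 0, [x[0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[k] - x_ref[k], Q) + cp.quad_form(u[k], R)
        cons += [x[k + 1] == A @ x[k] + B @ u[k],
                 cp.abs(u[k]) <= u_max]            # input constraints
    # terminal penalty standing in for the paper's terminal constraint
    cost += cp.quad_form(x[N] - x_ref[N], QN)

    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[0]                               # apply the first input only
```

In receding-horizon fashion, the controller would relinearize about the new state, call `mpc_step` again each sampling instant, and apply only the first input.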
Citations: 6
Recognition of Assembly Instructions Based on Geometric Feature and Text Recognition
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144892
Jaewoo Park, Isaac Kang, Junhyeong Kwon, Eunji Lee, Yoonsik Kim, Sujeong You, S. Ji, N. Cho
Recent advances in machine learning methods have increased the performance of object detection and recognition systems. Accordingly, automatic understanding of assembly instructions in manuals, in the form of electronic or paper materials, has also become an issue in the research community. This task is quite challenging because it requires automatic optical character recognition (OCR) as well as the understanding of various mechanical parts and diverse assembly illustrations that are sometimes difficult to understand even for humans. Although deep networks show high performance on many computer vision tasks, it is still difficult to perform this task with an end-to-end deep neural network due to the lack of training data and the diversity and ambiguity of illustrative instructions. Hence, in this paper, we propose to tackle this problem using both conventional non-learning approaches and deep neural networks, considering the current state of the art. Precisely, we first extract components with strict geometric structure, such as characters and illustrations, using conventional non-learning algorithms, and then apply deep neural networks to recognize the extracted components. The main targets considered in this paper are the types and numbers of connectors, and behavioral indicators such as circles, rectangles, and arrows in each cut of do-it-yourself (DIY) furniture assembly manuals. For these limited targets, we train a deep neural network to recognize them with high precision. Experiments show that our method works robustly on various types of furniture assembly instructions.
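As an example of the classical geometric-extraction stage, the sketch below uses a Hough transform to locate circular indicators on a manual page and crop them for a downstream CNN. The paper does not specify its algorithms, so the detector choice and thresholds here are assumptions.

```python
import cv2
import numpy as np

def find_circle_indicators(page_bgr):
    """Detect circular indicators on a manual page with a classical
    (non-learning) Hough transform, before a CNN classifies each crop."""
    gray = cv2.cvtColor(page_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)                  # suppress print noise
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
        param1=120, param2=40, minRadius=10, maxRadius=80)
    crops = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            y0, x0 = max(y - r, 0), max(x - r, 0)
            crops.append(page_bgr[y0:y + r, x0:x + r])  # patch fed to the CNN
    return crops
```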
Citations: 1
Implementation of a unified simulation for robot arm control with object detection based on ROS and Gazebo
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144984
Hyeonchul Jung, Min-Soo Kim, Yeheng Chen, H. Min, Taejoon Park
In this paper, we present a method to implement a robotic system with deep learning-based object detection in a simulation environment. The simulation environment is developed in Gazebo and runs on the Robot Operating System (ROS). ROS is a set of open-source software libraries that aims to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms; Gazebo is a convenient 3D simulator for use alongside ROS. This paper introduces the steps to create a robot arm system controlled through ROS, together with an object detection system using camera images, in a Gazebo environment.
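A minimal sketch of such a node is shown below: it subscribes to a Gazebo camera topic and publishes a joint trajectory whenever a frame arrives, with a placeholder where the object detector would run. The use of rospy and the topic and joint names are assumptions, not the paper's actual configuration.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

def on_image(msg):
    # placeholder: a detector would run on the camera frame here;
    # this sketch simply commands a fixed pose whenever a frame arrives
    traj = JointTrajectory()
    traj.joint_names = ["joint1", "joint2"]          # hypothetical joint names
    pt = JointTrajectoryPoint(positions=[0.5, -0.3],
                              time_from_start=rospy.Duration(1.0))
    traj.points = [pt]
    cmd_pub.publish(traj)

rospy.init_node("arm_vision_control")
cmd_pub = rospy.Publisher("/arm_controller/command", JointTrajectory, queue_size=1)
rospy.Subscriber("/camera/image_raw", Image, on_image)  # Gazebo camera plugin topic
rospy.spin()
```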
Citations: 2
A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144868
Hamid Isakhani, C. Xiong, Shigang Yue, Wenbin Chen
Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the underlying physiology of this capability could potentially address numerous challenging problems in flight engineering. However, the primitive nature of bioinspired research impedes such achievements; to bypass these limitations, this study introduces a bioinspired non-cooperative multi-objective optimization methodology based on a novel fusion of PARSEC airfoil parameterization, the Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiency. The proposed technique is validated on a conventional airfoil as well as the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, recording a 77% improvement in its gliding ratio.
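The paper's solver-coupled implementation is not given; the toy sketch below shows the Nash structure it describes, with two players alternately evolving their own share of the design vector while the other player's current best is frozen. The mutate-around-best GA and the `evaluate` objective are stand-ins for the paper's PARSEC-parameterized aerodynamic evaluation.

```python
import random

def nash_ga(evaluate, n_params, split, pop=30, gens=50):
    """Minimal two-player Nash genetic algorithm sketch.

    evaluate : objective to maximize over the full parameter vector
               (e.g. lift-to-drag from an aerodynamic solver; a stand-in here)
    split    : index dividing the vector between player A and player B;
               each player evolves only its own genes while the other
               player's current best stays frozen.
    """
    best = [random.random() for _ in range(n_params)]
    for _ in range(gens):
        for player in (0, 1):
            lo, hi = (0, split) if player == 0 else (split, n_params)
            # tiny GA on this player's genes: mutate-around-best population
            cand = []
            for _ in range(pop):
                g = best[:]
                for i in range(lo, hi):
                    g[i] += random.gauss(0.0, 0.05)
                cand.append(g)
            best = max(cand + [best], key=evaluate)  # Nash: other player fixed
    return best
```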
Citations: 3
The Design and Implementation of Human Motion Capture System Based on CAN Bus *
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144858
Xian Yue, Aibin Zhu, Jiyuan Song, Guangzhong Cao, Delin An, Zhifu Guo
Existing exoskeletons have problems with human motion capture and recognition: the human can only move passively in the exoskeleton, which makes it difficult to efficiently rebuild the connection between human muscles and nerves. This paper describes a system for human motion capture analysis based on a CAN bus. The system adopts a distributed architecture that collects data including plantar pressure and exoskeleton joint angles, providing data support for subsequent motion recognition algorithms. The system uses a simple measurement method and depends little on the measurement environment. Its accuracy is verified by a comparison experiment with a Vicon 3D motion capture system. The experimental results indicate that the designed human motion capture system has high accuracy and meets the requirements of human motion perception when controlling the rehabilitation exoskeleton.
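To make the distributed CAN architecture concrete, here is a minimal receive loop using the python-can library; the arbitration IDs, payload layout, and scaling are hypothetical, since the paper does not specify its frame format.

```python
import can  # python-can

# Hypothetical frame layout: each sensor node owns one arbitration ID and
# packs a little-endian int16 reading into the first two data bytes.
NODE_IDS = {0x101: "left_plantar_pressure",
            0x201: "hip_joint_angle"}

bus = can.interface.Bus(channel="can0", interface="socketcan")
while True:
    msg = bus.recv(timeout=1.0)
    if msg is None or msg.arbitration_id not in NODE_IDS:
        continue
    raw = int.from_bytes(msg.data[0:2], "little", signed=True)
    print(NODE_IDS[msg.arbitration_id], raw / 100.0)  # e.g. scale to degrees/kPa
```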
Citations: 2
Design and Control of a Piezoelectric Actuated Prostate Intervention Robotic System*
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144768
Yuyang Lin, Yunlai Shi, Jun Zhang, Fugang Wang, Wenbo Wu, Haichao Sun
Robot-assisted prostate intervention under magnetic resonance imaging (MRI) guidance is a promising method to improve clinical performance compared with the manual method. An MRI-guided 6-DOF serial prostate intervention robot fully actuated by ultrasonic motors is designed, and its control strategy is proposed. The mechanical design of the proposed robot is presented based on the design requirements of prostate intervention. Binocular vision is adopted as the in-vitro needle tip measurement method, and the robotic system combined with the binocular cameras is illustrated. Then the ultrasonic motor driving controller is designed. Finally, the position accuracy of the robot is evaluated; the position error is about 1.898 mm, which shows good accuracy. The position tracking characteristics of the ultrasonic motor are presented, with a maximum tracking error under 7.5°, showing the efficiency of the driving controller design. The experiments indicate that the prostate intervention robot is feasible and performs well in the accuracy evaluation.
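Once the stereo cameras are calibrated, binocular needle tip measurement reduces to triangulation; a minimal sketch with OpenCV is shown below. How the tip pixel is detected in each image is left out, since the abstract does not describe it.

```python
import cv2
import numpy as np

def needle_tip_3d(P_left, P_right, tip_left_px, tip_right_px):
    """Triangulate the needle tip from a calibrated stereo pair.

    P_left, P_right : 3x4 camera projection matrices from stereo calibration
    tip_*_px        : (u, v) pixel coordinates of the tip in each image
    """
    pl = np.array(tip_left_px, dtype=np.float64).reshape(2, 1)
    pr = np.array(tip_right_px, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)  # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()                     # metric 3-D point
```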
Citations: 5
sEMG-based Static Force Estimation for Human-Robot Interaction using Deep Learning
Pub Date: 2020-06-01 DOI: 10.1109/UR49135.2020.9144869
Se Jin Kim, W. Chung, Keehoon Kim
Human-robot interaction (HRI) is a rapidly growing research area that occurs in many applications, including human-robot collaboration, human power augmentation, and rehabilitation robotics. As it is hard to exactly calculate the intended motion trajectory, interaction control is generally applied in HRI instead of pure motion control. To implement interaction control, force information is necessary, and force sensors are widely used for force feedback. However, force sensors have some limitations: 1) they are subject to breakdown, 2) they impose additional volume and weight on the system, and 3) the places where they can be mounted are constrained. In this situation, force estimation can be a good solution. However, if force in a static situation must be measured, using position and velocity is not sufficient because they are no longer influenced by the exerted force. Therefore, we propose sEMG-based static force estimation using deep learning. sEMG provides useful information about human-exerted force because it reflects human intention, and a deep learning approach is used to extract the complex relationship between sEMG and force. Experimental results show that when a force with a maximal value of 63.2 N is exerted, the average force estimation error is 3.67 N. The proposed method also shows that the force onset timing of the estimated force is earlier than that of the force sensor signal, which would be advantageous for faster human intention recognition.
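The abstract does not detail the network; as an illustration of the general approach (a regressor from a multi-channel sEMG window to a static force value), here is a small 1-D CNN sketch in PyTorch. The architecture, channel count, and window length are assumptions.

```python
import torch
import torch.nn as nn

class EMGForceNet(nn.Module):
    """Sketch of a 1-D CNN regressor from a multi-channel sEMG window to a
    scalar static force (architecture is an assumption, not the paper's)."""
    def __init__(self, n_channels=8, window=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, x):           # x: (batch, n_channels, window)
        return self.net(x)

model = EMGForceNet()
emg_window = torch.randn(4, 8, 200)  # 4 windows, 8 electrodes, 200 samples
force_pred = model(emg_window)       # (4, 1) estimated forces in newtons
```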
Citations: 1