Self-adaptive Cobots in Cyber-Physical Production Systems

Roberto Nogueira, João C. P. Reis, R. Pinto, G. Gonçalves
{"title":"Self-adaptive Cobots in Cyber-Physical Production Systems","authors":"Roberto Nogueira, João C. P. Reis, R. Pinto, G. Gonçalves","doi":"10.1109/ETFA.2019.8869165","DOIUrl":null,"url":null,"abstract":"Absolute automation in certain industries, such as the automotive industry, has proven to be disadvantageous. Robots are fairly capable when performing tasks that are repetitive and demand precision. However, a hybrid solution comprised of the adaptability and resourcefulness of humans cooperating, in the same task, with the precision and efficiency of machines is the next step for automation. Manipulators, however, lack self-adaptability and true collaborative behaviour. And so, through the integration of vision systems, manipulators can perceive their environment and also understand complex interactions. In this paper, a vision-based collaborative proof-of-concept framework is proposed using the Kinect v2, a UR5 robotic manipulator and MATLAB. This framework implements 3 behavioural modes, 1) a Self-Adaptive mode for obstacle detection and avoidance, 2) a Collaborative mode for physical human-robot interaction and 3) a standby Safe mode. These modes are activated with recourse to gestures, by virtue of the body tracking and gesture recognition algorithm of the Kinect v2. Additionally, to allow self-recognition of the robot, the Region Growing segmentation is combined with the UR5’s Forward Kinematics for precise, near real-time segmentation. Furthermore, self-adaptive reactive behaviour is implemented by using artificial repulsive action for the manipulator’s end-effector. Reaction times were tested for all three modes, being that Collaborative and Safe mode would take up to 5 seconds to accomplish the movement, while Self-Adaptive mode could take up to 10 seconds between reactions.","PeriodicalId":6682,"journal":{"name":"2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)","volume":"44 1","pages":"521-528"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ETFA.2019.8869165","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Full automation in certain industries, such as the automotive industry, has proven disadvantageous. Robots are well suited to repetitive tasks that demand precision; the next step for automation, however, is a hybrid solution that combines the adaptability and resourcefulness of humans, cooperating on the same task, with the precision and efficiency of machines. Manipulators currently lack self-adaptability and truly collaborative behaviour. By integrating vision systems, manipulators can perceive their environment and understand complex interactions. In this paper, a vision-based collaborative proof-of-concept framework is proposed using the Kinect v2, a UR5 robotic manipulator, and MATLAB. The framework implements three behavioural modes: 1) a Self-Adaptive mode for obstacle detection and avoidance, 2) a Collaborative mode for physical human-robot interaction, and 3) a standby Safe mode. These modes are activated through gestures, using the body-tracking and gesture-recognition algorithms of the Kinect v2. Additionally, to allow the robot to recognise itself in the depth image, Region Growing segmentation is combined with the UR5's forward kinematics for precise, near real-time segmentation. Furthermore, self-adaptive reactive behaviour is implemented by applying an artificial repulsive action to the manipulator's end-effector. Reaction times were measured for all three modes: Collaborative and Safe mode take up to 5 seconds to complete a movement, while Self-Adaptive mode can take up to 10 seconds between reactions.
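The paper itself does not include source code, but two of the techniques named in the abstract are standard enough to sketch. First, the "artificial repulsive action" on the end-effector matches the classic repulsive potential-field pattern. The MATLAB sketch below is a minimal illustration of that idea, not the authors' implementation: the gain `k_rep` and influence radius `rho_0` are assumed tuning parameters, and the obstacle point would come from the Kinect v2 point cloud after the robot's own body has been segmented out.

```matlab
% Minimal potential-field sketch (assumed parameters, not the paper's code).
% Computes a repulsive velocity for the end-effector when an obstacle
% point enters an influence radius rho_0 around it.
function v_rep = repulsive_action(p_ee, p_obs, k_rep, rho_0)
    % p_ee  : 3x1 end-effector position [m]
    % p_obs : 3x1 nearest obstacle point from the depth sensor [m]
    % k_rep : repulsive gain (assumed tuning parameter)
    % rho_0 : influence radius [m]; no reaction beyond this distance
    diff = p_ee - p_obs;
    rho  = norm(diff);               % distance to the obstacle
    if rho > rho_0 || rho == 0
        v_rep = zeros(3, 1);         % outside the influence region
    else
        % Negative gradient of the classic repulsive potential
        % U = 0.5 * k_rep * (1/rho - 1/rho_0)^2
        v_rep = k_rep * (1/rho - 1/rho_0) * (1/rho^2) * (diff / rho);
    end
end
```

The resulting velocity term would be added to the nominal end-effector command: it grows as the obstacle approaches and vanishes once the obstacle leaves the influence radius, so the nominal motion resumes on its own.

Second, combining Region Growing with the UR5's forward kinematics implies computing the robot's link positions so that depth-image points belonging to the robot can be used as segmentation seeds. Below is a minimal sketch of that forward-kinematics step using the commonly published UR5 Denavit-Hartenberg parameters; the seed selection and camera-to-base calibration are omitted, and this is again an illustration rather than the paper's code.

```matlab
% Forward kinematics of the UR5 from standard DH parameters.
% Returns the 3x1 positions of each joint-frame origin in the base
% frame; these can seed a Region Growing segmentation of the robot
% in the depth image (seeding step not shown).
function origins = ur5_joint_origins(q)
    % q : 6x1 vector of joint angles [rad]
    % Commonly published UR5 DH parameters [m], [rad]
    a     = [0, -0.425, -0.39225, 0, 0, 0];
    d     = [0.089159, 0, 0, 0.10915, 0.09465, 0.0823];
    alpha = [pi/2, 0, 0, pi/2, -pi/2, 0];

    T = eye(4);
    origins = zeros(3, 6);
    for i = 1:6
        % Standard DH link transform for joint i
        A = [cos(q(i)), -sin(q(i))*cos(alpha(i)),  sin(q(i))*sin(alpha(i)), a(i)*cos(q(i));
             sin(q(i)),  cos(q(i))*cos(alpha(i)), -cos(q(i))*sin(alpha(i)), a(i)*sin(q(i));
             0,          sin(alpha(i)),            cos(alpha(i)),           d(i);
             0,          0,                        0,                       1];
        T = T * A;                   % accumulate the kinematic chain
        origins(:, i) = T(1:3, 4);   % origin of frame i in base frame
    end
end
```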