
Latest publications: 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)

A paradigm of automatic ICT testing system development in practice
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205770
Shuhao Liang
In-circuit test (ICT) is an indispensable process for sustaining the quality of printed circuit boards (PCBs) during assembly and fabrication. Applying automation to reduce labor and prevent errors in ICT has been studied by academia and industry for decades. Here we demonstrate a robot-centric ICT testing system that integrates the peripheral equipment, including the shop-floor control system (SPCS). The graphical programming software LabVIEW is exploited to integrate the robot arm, in-circuit test machine, PLC, HMI, and barcode reader. Communication among the facilities and error handling are the main challenges in automated ICT system development. Heterogeneous communication protocols and third-party devices with unique syntax caused some programming difficulties. The challenge of error handling is that faults may arise in hardware, software, or communication; moreover, these errors may occur at 5-6 different facilities with chain effects. The robot arm dominates the main control sequence from test start to finish. As a result, a stable automated ICT test system with real-time status monitoring has been presented. It assists field personnel in eliminating problems quickly and promotes overall production-line operation efficiency.
Citations: 0
A Novel Robotic Grasp Detection Technique by Integrating YOLO and Grasp Detection Deep Neural Networks *
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205791
J. Yang, Ui-Kai Chen, Kai-Chu Chang, Ying-Jen Chen
This paper proposes a robotic grasp detection technique that integrates a you-only-look-once (YOLO) deep neural network (DNN) and a grasp detection DNN. Many people in the world cannot move their own bodies, whether because of an accident or physical deterioration, so more human resources must be invested to assist in their daily lives. With new technological advances, robots are gradually able to replicate human movements. Hence, we intend to design a remote-controlled fetching robot. The system incorporates Internet of Things (IoT) technology, and users can employ intelligent devices to control this robot and its arm to fetch the items they want. This paper focuses on detecting the grasp of the robotic arm by integrating YOLO and grasp detection DNNs. First, YOLOv3 is applied to achieve object detection. Then a robotic grasp detection DNN is proposed to detect the grasp. After that, the point-cloud information of the object is utilized to calculate the normal vector of the grasp position, so that the robotic arm can approach the target along that normal vector. Finally, experimental results are given to show the practicality of the proposed robotic grasp detection technique.
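The abstract does not specify how the normal vector at the grasp position is obtained from the point cloud; a common approach, sketched below on synthetic data (not taken from the paper), fits the local surface patch by PCA and takes the smallest-variance eigenvector as the surface normal:

```python
import numpy as np

def estimate_normal(points):
    """Estimate the surface normal of a small point-cloud patch.

    The normal is taken as the eigenvector of the patch covariance
    matrix with the smallest eigenvalue (classic PCA surface fit).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # smallest-variance direction
    return normal / np.linalg.norm(normal)

# Hypothetical patch: points scattered on the z = 0 plane,
# so the estimated normal should align with the z-axis.
rng = np.random.default_rng(0)
patch = np.c_[rng.uniform(-1, 1, (50, 2)), np.zeros(50)]
n = estimate_normal(patch)
```

The arm would then be commanded to approach the grasp point along `-n` (sign chosen toward the camera).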
Citations: 3
Design of Continuous-Time Sigma-Delta Modulator with Noise Reduction for Robotic Light Communication and Sensing
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205776
W. Lai
A continuous-time sigma-delta ($\Sigma\Delta$) modulator employing a nonreturn-to-zero (NRZ) digital-to-analog converter (DAC) and pulse shaping to reduce the impact of clock jitter noise is presented. The proposed modulator comprises a third-order RC operational-amplifier-based loop filter, a 4-bit internal quantizer operating at 160 MHz, and three DACs. The excess loop delay compensated by the NRZ DAC is set to half the sampling period of the quantizer. The $\Sigma\Delta$ modulator, implemented in a TSMC 0.18 um CMOS technology for robotic light communication and intelligent sensor fusion, dissipates 10.1 mW from a 1.2 V supply. Measured results illustrate that the $\Sigma\Delta$ modulator achieves 66.9 dB SNR, a peak 62 dB SNDR, and 10.3 ENOB over a 10 MHz band at an over-sampling ratio (OSR) of 8. Including pads, the chip dimension is $0.363\,mm^{2}$.
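The ENOB figure follows from the standard conversion SNDR = 6.02·ENOB + 1.76 dB; a minimal sketch (not from the paper) applying it:

```python
def enob(sndr_db: float) -> float:
    """Effective number of bits from SNDR via the standard
    relation SNDR = 6.02 * ENOB + 1.76 dB."""
    return (sndr_db - 1.76) / 6.02

# Applied to the reported 62 dB peak SNDR; the paper's 10.3 ENOB
# is its own measured figure over the full 10 MHz band.
bits = enob(62.0)
```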
Citations: 0
Model Predictive Control with Laguerre Function based on Social Ski Driver Algorithm for Autonomous Vehicle
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205782
M. Elsisi
The steering control of autonomous vehicles represents a vital issue in the vehicular system. Model predictive control (MPC) has been proven an effective controller. However, representing the MPC with a large prediction horizon and control horizon requires a large number of parameters and is complicated. Discrete-time Laguerre functions can cope with this issue and represent the MPC with few parameters. The Laguerre functions, however, require proper tuning of their parameters in order to provide a good response with the MPC. This paper introduces a new design method that tunes the parameters of the MPC with the Laguerre function using a new artificial intelligence (AI) technique named the social ski driver algorithm (SSDA). The proposed SSDA-based MPC is applied to adjust the steering angle of an autonomous vehicle, including vision dynamics. Further test scenarios are created to ensure the effectiveness of the proposed control in coping with variations of road curvature.
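The discrete-time Laguerre functions that parameterize the control horizon can be generated by a simple state-space recursion; a minimal sketch of the standard formulation (pole `a` and function count `N` are free tuning parameters, which is what the SSDA would optimize) is:

```python
import numpy as np

def laguerre_basis(a: float, N: int, steps: int) -> np.ndarray:
    """Generate N discrete-time Laguerre functions over `steps` samples
    using the state-space recursion L(k+1) = A @ L(k), pole 0 <= a < 1."""
    beta = 1.0 - a * a
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = beta * (-a) ** (i - j - 1)
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    out = np.empty((steps, N))
    for k in range(steps):
        out[k] = L
        L = A @ L
    return out

basis = laguerre_basis(a=0.5, N=3, steps=200)
gram = basis.T @ basis  # should approximate the identity (orthonormality)
```

The control increments over the horizon are then expressed as a weighted sum of these few basis functions instead of one parameter per sample.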
Citations: 2
Implementation of the Camera-based Approach to Guide the Robot with Minimization Movements
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205771
Hsin-Hsiung Huang, Juing-Huei Su, Chyi-Shyong Lee, Hsuan-Hao Li
In this paper, we implement a camera-based approach that identifies the image and uses the extracted information to guide the minimal movement of a robot between given starting and target points. The advantages of this paper are as follows. First, the camera guides the robot over a wireless link. Second, an inner-product-based approach is applied to calculate the distance between two given points. Third, we minimize the error between the calculated distance and the physical distance. Hence, the approach accurately guides the robot to the target via the camera. Experimental results show that the camera-based approach can accurately guide the robot to the target with minimal movement, which saves battery power.
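The inner-product-based distance mentioned above amounts to computing the squared distance as the inner product of the difference vector with itself; a minimal sketch with hypothetical pixel coordinates:

```python
import numpy as np

def inner_product_distance(p, q) -> float:
    """Distance between two points via the inner product:
    d^2 = <p - q, p - q>."""
    d = np.asarray(p, dtype=float) - np.asarray(q, dtype=float)
    return float(np.sqrt(d @ d))

# Hypothetical image coordinates of the robot and the target point.
dist = inner_product_distance((3.0, 4.0), (0.0, 0.0))
```

Calibrating this pixel distance against the physical distance is what lets the controller minimize the robot's travel.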
Citations: 0
Design of 1V CMOS 5.8 GHz VCO with Switched Capacitor Array Tuning for Intelligent Sensor Fusion
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205785
W. Lai
The article proposes a wide-tuning voltage-controlled oscillator (VCO) adopting a 4-bit switched-capacitor array (SCA). The SCA VCO, with a cross-coupled switching pair, varactors, and an LC circuit operating at a low supply voltage of 1 V, was fabricated in a 0.18 um 1P6M CMOS technology for intelligent sensor fusion. Measured results illustrate that at a 1 V supply the SCA VCO is tunable from 4.47 GHz to 5.95 GHz, corresponding to 28.7%. The phase noise is -115.8 dBc/Hz at 1 MHz offset from 5.8 GHz, the tuning range is 1880 MHz, the FOM is -182.7 dBc/Hz, the power consumption is 7.0 mW, and the chip dimension is $0.817\times 0.599\,mm^{2}$.
Citations: 0
Formation Control for Mobile Robot using Fuzzy - PI Controller
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205789
Min-Fan Ricky Lee, H.P.M Willybrordus, Sukamto, Sharfiden Hassen, Asep Nugroho
Protecting soldiers during the long march of a platoon formation is a crucial mission in military operations. Autonomous ground mobile robots can be deployed to carry out this kind of mission. The primary task is to maintain each robot's position automatically based on the movement of the soldiers in the platoon formation. GPS is employed to determine each soldier's current pose. This information is used as input to create a dynamic convex hull around the platoon. A proportional-integral (PI) controller is applied to control each robot so that it moves along a desired trajectory. Fuzzy logic control (FLC) is employed to tune the gains of the PI controller to optimize performance. Three protective robots and nine soldiers are used to evaluate the algorithm in simulation. The proposed algorithm provides a platoon of soldiers with an optimal protective encirclement and enhances their safety. The simulation results show good performance using the proposed controller.
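The dynamic convex hull around the platoon can be recomputed each update from the soldiers' projected GPS positions; a minimal sketch (not the paper's code) using the monotone-chain algorithm with hypothetical coordinates:

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2-D positions (e.g. soldiers'
    GPS coordinates projected to a local plane), counter-clockwise."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Nine hypothetical soldier positions; interior soldiers drop out of the hull.
platoon = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2), (3, 1), (2, 2), (1, 1)]
hull = convex_hull(platoon)
```

The protective robots would then be assigned reference trajectories along (an offset of) this hull boundary for the PI controllers to track.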
Citations: 0
Object Detection using Transfer Learning for Underwater Robot
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205774
Chia-Chin Wang, H. Samani
In this paper, the use of transfer learning for object detection in an underwater environment is examined and evaluated. The YOLO deep learning method is utilized for detection of different types of fish underwater. An ROV equipped with a camera is employed for underwater video streaming, and the data has been analyzed on the main computer. Our experimental results confirmed an improvement in mAP of 4% using transfer learning.
Citations: 4
Intelligent Robot for Worker Safety Surveillance: Deep Learning Perception and Visual Navigation
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205772
Min-Fan Ricky Lee, Tzu-Wei Chien
The fatal injury rate for the construction industry is higher than the average for all industries, and researchers have recently shown an increased interest in occupational safety in construction. However, current methods using conventional machine learning with stationary cameras suffer from severe limitations: perceptual aliasing (e.g., different places/objects can appear identical), occlusion (e.g., place/object appearance changes between visits), seasonal/illumination changes, significant viewpoint changes, etc. This paper proposes a perception module using end-to-end deep learning and visual SLAM (simultaneous localization and mapping) for effective and efficient object recognition and navigation with a differential-drive mobile robot. Various deep-learning frameworks and visual navigation strategies are implemented and validated with evaluation metrics for the selection of the best model. The deep-learning models' predictions are evaluated via model speed, accuracy, complexity, precision, recall, P-R curve, and F1 score. YOLOv3 shows the best trade-off among all algorithms, with 57.9% mean average precision (mAP) in real-world settings, and can process 45 frames per second (FPS) on an NVIDIA Jetson TX2, which makes it suitable for real-time detection and a strong candidate for deploying the neural network on a mobile robot. The evaluation metric used for the comparison of laser SLAM is root mean square error (RMSE); the Google Cartographer SLAM shows the lowest RMSE and acceptable processing time. The experimental results demonstrate that the perception module can meet the requirements of the head-protection criteria in the Occupational Safety and Health Administration (OSHA) standards for construction. More precisely, this module can effectively detect construction workers' non-hardhat use in different construction-site conditions and can facilitate improved safety inspection and supervision.
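The precision, recall, and F1 metrics used above follow the usual definitions from true-positive, false-positive, and false-negative counts; a minimal sketch with hypothetical counts for one detector class:

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall and F1 score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for hardhat detections in one evaluation run.
p, r, f1 = detection_metrics(tp=90, fp=10, fn=30)
```

Sweeping the detector's confidence threshold and recomputing these values traces out the P-R curve, whose area per class averages into mAP.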
Citations: 6
Autonomous Pose Correction and Landing System for Unmanned Aerial Vehicles
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205790
Min-Fan Ricky Lee, S. K., A. J.
Landing is one of the most dangerous maneuvers in the entire flight phase of an unmanned aerial vehicle (UAV). Sudden changes in the environment cause issues with the stability of the drone, which makes landing the UAV precisely a difficult challenge. To improve the safety of UAVs flying in urban areas, they should be landed carefully, in a GPS-denied or network-disconnected environment, using vision and inertial data. This paper presents a UAV safe-landing system comprising three sub-systems: detection of designated landing sites with autonomous pose correction, landing-site inspection, and landing flight control. This paper deals in depth with the vision-based target detection and pose correction system. The airborne vision system is utilized to recognize certain markers on the landing site. The information from the onboard visual sensors and the inertial measurement unit (IMU) is utilized to control and land the UAV at a precise location along an ideal landing trajectory. A series of experiments has been outlined to test and optimize the proposed method using the Parrot AR.Drone 2.0.
着陆是无人机在整个飞行阶段中最危险的动作之一。环境的突然变化会导致无人机的稳定性问题,这对无人机的精确着陆提出了困难的挑战。为了提高任何无人机在城市地区飞行的安全性,无人机应该在gps拒绝或网络断开的环境中,通过使用视觉和惯性数据小心着陆。提出了一种由指定着陆点探测和自主姿态校正、着陆点检测和着陆飞行控制三个子系统组成的无人机安全着陆系统。本文对基于视觉的目标检测与姿态校正系统进行了深入的研究。机载视觉系统用于识别着陆点上的某些标记。来自机载视觉传感器和惯性测量单元(IMU)的信息被用来控制和使无人机在一个完美的着陆轨迹上,在一个精确的位置上着陆。本文采用Parrot AR.Drone 2.0进行了一系列实验,对所提出的方法进行了测试和优化。
{"title":"Autonomous Pose Correction and Landing System for Unmanned Aerial Vehicles","authors":"Min-Fan Ricky Lee, S. K., A. J.","doi":"10.1109/ARIS50834.2020.9205790","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205790","url":null,"abstract":"Landing is one of the most dangerous maneuvers in the entire flight phase of an Unmanned Aerial Vehicle (UAV). Sudden changes in the environment affect the stability of the drone, which makes landing the UAV precisely a difficult challenge. To improve the safety of UAVs flying in urban areas, they should be landed carefully, even in a GPS-denied or network-disconnected environment, using vision and inertial data. This paper presents a UAV safe-landing system comprising three sub-systems: detection of designated landing sites with autonomous pose correction, landing-site inspection, and landing flight control. The paper treats the vision-based target detection and pose-correction system in depth. The airborne vision system recognizes markers placed on the landing site. Information from the onboard visual sensors and Inertial Measurement Unit (IMU) is used to control the UAV and land it along an accurate trajectory at a precise location. A series of experiments is outlined to test and optimize the proposed method using the Parrot AR.Drone 2.0.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132054450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
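The pose-correction step this abstract describes — steering the UAV toward the detected landing marker — can be sketched as a simple proportional controller that maps the marker's pixel offset from the image centre to lateral velocity commands. The gain, image size, and sign conventions here are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of vision-based pose correction: a proportional
# controller converting the landing marker's pixel offset from the image
# centre into normalized lateral velocity commands.

def pose_correction(marker_px, image_size=(640, 360), gain=0.5):
    """Map marker pixel position to (vx, vy) commands in [-gain, gain].

    marker_px  -- (u, v) pixel coordinates of the detected marker centre
    image_size -- (width, height) of the camera frame
    Returns normalized lateral velocity commands; (0, 0) means centred.
    """
    u, v = marker_px
    w, h = image_size
    # Normalized offsets in [-1, 1]; positive x = marker right of centre.
    off_x = (u - w / 2) / (w / 2)
    off_y = (v - h / 2) / (h / 2)
    return (gain * off_x, gain * off_y)

print(pose_correction((320, 180)))  # centred marker -> (0.0, 0.0)
print(pose_correction((480, 180)))  # marker right of centre -> (0.25, 0.0)
```

In practice the marker centre would come from a fiducial detector (e.g. an ArUco-style marker pipeline) and the commands would be fused with IMU feedback before being sent to the flight controller; this sketch isolates only the image-offset-to-command mapping.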
引用次数: 1