
Latest publications: 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)

Synchronous Dual-Arm Manipulation by Adult-Sized Humanoid Robot
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205783
Hanjaya Mandala, Saeed Saeedvand, J. Baltes
This paper introduces synchronous dual-arm manipulation with obstacle-avoidance trajectory planning for an adult-sized humanoid robot. We propose high-precision 3D object coordinate tracking using LiDAR point cloud data and adopt a Gaussian distribution in the robot's manipulation trajectory planning. Our 3D object detection comprises three methods: automatic K-means clustering, deep-learning object classification, and convex hull localization. A lightweight 3D object classifier based on a convolutional neural network (CNN) is proposed, reaching 91% accuracy with 0.34 ms inference time on a CPU. In empirical experiments, the Gaussian manipulation trajectory planning is applied to the adult-sized dual-arm robot, showing efficient object placement with obstacle avoidance.
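As an illustration of the clustering step, here is a minimal sketch of one plausible reading of "automatic K-means": sweeping the cluster count and keeping the silhouette-best partition of the LiDAR points. The point-cloud shape, the k range, and the silhouette criterion are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: pick k by silhouette score (the paper's actual
# auto-selection criterion is not specified).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def auto_kmeans(points: np.ndarray, k_max: int = 8):
    """Cluster LiDAR points into object candidates, choosing k automatically."""
    best_k, best_score, best_labels = 2, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        score = silhouette_score(points, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    return best_k, best_labels

# Cluster centroids serve as coarse 3D object coordinates for tracking.
points = np.random.rand(500, 3)  # stand-in for a real LiDAR scan
k, labels = auto_kmeans(points)
centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
```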
Citations: 2
Landing Site Inspection and Autonomous Pose Correction for Unmanned Aerial Vehicles
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205773
Min-Fan Ricky Lee, A. J., K. Saurav, D. Anshuman
The large number of disturbances and uncertainties in the environment makes landing one of the trickiest maneuvers in all phases of flying an unmanned aerial vehicle, and the situation worsens further during emergencies. To allow UAVs to land safely on rough terrain with many ground objects, an automatic landing-site inspection and real-time pose-correction system is needed. This paper presents a method for detecting designated landing sites and landing autonomously in a safe environment. The airborne vision system uses a fully convolutional neural network to recognize landing markers on the landing site and to detect objects. An automatic pose-correction algorithm is developed to position the drone for landing in a safe zone, as near to the landing marker as possible. Information from the onboard visual sensors and the Inertial Measurement Unit (IMU) is used to estimate the pose for the ideal landing trajectory. A series of experiments is presented to test and optimize the proposed method.
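A minimal sketch of what a pose-correction step could look like, assuming the vision system reports the marker center in image coordinates and a simple proportional law maps pixel error to velocity commands; the paper's actual control law and gains are not given, so the gain below is a placeholder.

```python
# Hypothetical proportional pose correction toward the detected marker.
def pose_correction(marker_px, image_size, k_p=0.002):
    """Return lateral/longitudinal velocity commands that steer the UAV so
    the detected landing marker moves toward the image center."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err_x = marker_px[0] - cx  # positive: marker right of center
    err_y = marker_px[1] - cy  # positive: marker below center
    return k_p * err_x, k_p * err_y

# Example: marker detected right of and below the image center.
vx, vy = pose_correction(marker_px=(700, 420), image_size=(1280, 720))
```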
Citations: 3
Development and Implementation of Novel Six-Sided Automated Optical Inspection for Metallic Objects
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205786
Fauzy Satrio Wibowo, Y. R. Wahyudi, Hsien-I Lin
This paper proposes an inspection system based on Automated Optical Inspection (AOI) to inspect six-sided metallic objects. The objective is to develop a system that provides good-quality images of objects moving on a production line. The proposed system comprises an industrial robotic arm and a set of cameras. The scanning system provides six-sided inspection divided into two stages: (1) a main-frame inspection (5 sides) and (2) an external-frame inspection (1 side). The industrial robotic arm picks the object up from the production line; the system then detects its orientation, shifts the position of the picked object, and calibrates it to the reference orientation and position. To validate image quality, we use pixel differences to analyze the repeatability of the object pose. According to the experimental results, the system not only provides clear images but also achieves good position repeatability of 4.95 mm.
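A minimal sketch of a pixel-difference repeatability check, assuming grayscale scans of the same object face as NumPy arrays; the noise threshold is a placeholder, and converting the result to millimeters would require a pixel-to-mm calibration factor that the abstract does not state.

```python
# Hypothetical pixel-difference metric for pose repeatability.
import numpy as np

def diff_ratio(ref: np.ndarray, captured: np.ndarray, thresh: int = 30) -> float:
    """Fraction of pixels that differ beyond a noise threshold; lower values
    indicate better pose repeatability between successive scans."""
    diff = np.abs(captured.astype(np.int16) - ref.astype(np.int16))
    return float((diff > thresh).mean())
```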
Citations: 1
CAD-based offline programming platform for welding applications using 6-DOF and 2-DOF robots
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205784
Amit Kumar Bedaka, Chyi-Yeu Lin
The main objective of this research is to design and develop an offline programming (OLP) simulation platform for welding applications. The proposed platform was developed using OPEN CASCADE libraries in a C++ integration environment to perform a given task on 6-DOF and 2-DOF robots. The welding path is generated autonomously from the CAD features, and all calculations are done within the platform. The OLP simulation environment covers loading CAD files, kinematics analysis, welding path planning, welding parameters, motion planning, simulation, and robot execution files. In addition, the platform can generate a collision-avoidance path before mapping to a real site.
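To illustrate one small piece of path generation, the sketch below resamples a welding seam, assumed to be an ordered 3D polyline extracted from the CAD model, into evenly spaced waypoints. The step size and fixed-orientation assumption are illustrative; the platform's actual OPEN CASCADE feature extraction is far richer.

```python
# Hypothetical seam resampling from a CAD-derived polyline.
import numpy as np

def seam_waypoints(polyline: np.ndarray, step: float) -> np.ndarray:
    """Resample an (N, 3) seam polyline at a constant arc-length step to
    produce evenly spaced welding waypoints for the robot program."""
    seg_lengths = np.linalg.norm(np.diff(polyline, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg_lengths)])  # cumulative arc length
    targets = np.arange(0.0, s[-1], step)
    return np.stack([np.interp(targets, s, polyline[:, i]) for i in range(3)],
                    axis=1)

# Example: an L-shaped seam, one waypoint every 5 mm.
seam = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [100.0, 50.0, 0.0]])
waypoints = seam_waypoints(seam, step=5.0)
```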
Citations: 9
Multi-model Fusion on Real-time Drowsiness Detection for Telemetric Robotics Tracking Applications
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205780
R. Luo, Chin-Hao Hsu, Yu-Cheng Wen
Driver drowsiness is one of the common causes of road crashes; according to research, about twenty percent of road accidents are related to drowsy driving. With the development of technology, various approaches have been introduced to detect driver drowsiness. In this paper, we propose a multi-model fusion system composed of three models that captures the driver's face and detects drowsiness in real time for telemetric robotics tracking applications. The sensor is an RGB camera mounted in front of the driver to obtain facial images. We then combine the results based on the state of eye blinks, yawns, and head deviation to determine whether the driver is drowsy, and we test our models to obtain the weighting factors of the drowsiness value. Experiments show that our system detects drowsiness with high accuracy.
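A minimal sketch of the fusion step, assuming each of the three detectors emits a per-frame score in [0, 1]; the weights are the "weighting factors" the paper obtains by testing, so the values below are placeholders, not the authors' numbers.

```python
# Hypothetical weighted fusion of the three drowsiness cues.
def drowsy_value(eye_closed: float, yawning: float, head_deviation: float,
                 w=(0.5, 0.3, 0.2), threshold: float = 0.6) -> bool:
    """Fuse the three model outputs into a single drowsiness decision."""
    score = w[0] * eye_closed + w[1] * yawning + w[2] * head_deviation
    return score >= threshold

# Example frame: eyes mostly closed, mild yawn, head near upright.
alert = drowsy_value(eye_closed=0.9, yawning=0.4, head_deviation=0.2)
```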
Citations: 1
Simulation and Control of a Robotic Arm Using MATLAB, Simulink and TwinCAT
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205777
Wei-chen Lee, Shih-an Kuo
Developing robot applications without being able to view the robot's movement is challenging. Moreover, establishing motion paths and tuning controller parameters on the real robot is tedious when no simulation program is available. To resolve these issues for a low-cost robot, we developed a system that integrates kinematics and motion-control simulation using MATLAB and Simulink. The system can then be connected to a real robot through TwinCAT to verify the simulation results. Case studies demonstrate that the system works well and can be applied to robotic arms that lack simulators.
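The paper builds its kinematic model in MATLAB/Simulink; as a language-neutral illustration of the kind of model being simulated, here is a minimal Python forward-kinematics sketch using standard Denavit-Hartenberg parameters. The DH table itself is arm-specific and assumed.

```python
# Minimal forward-kinematics sketch with standard DH parameters.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Chain per-joint transforms to obtain the end-effector pose."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```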
Citations: 3
Landing Area Recognition using Deep Learning for Unmanned Aerial Vehicles
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205793
Min-Fan Ricky Lee, Asep Nugroho, Tuan-Tang Le, Bahrudin, Saul Nieto Bastida
The lack of an automated landing-site detection system for Unmanned Aerial Vehicles (UAVs) has been identified as one of the main impediments to allowing UAV flight over populated areas in civilian airspace for logistical transport tasks. This research proposes landing-area localization and obstruction detection for UAVs based on a deep-learning Faster R-CNN and a feature-matching algorithm, whose output decides whether the landing area is safe. The final system was deployed on an aerial mobile robot platform and performed effectively.
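As a sketch of the feature-matching half of the pipeline (the Faster R-CNN detector is omitted), the snippet below counts ORB matches between a stored landing-pad template and the current aerial frame using OpenCV; the threshold interpretation and ratio value are assumptions.

```python
# Hypothetical ORB feature matching against a landing-pad template.
import cv2

def marker_match_count(template_gray, scene_gray, ratio: float = 0.75) -> int:
    """Count good ORB matches between the pad template and the aerial frame;
    a low count suggests the pad is absent or obstructed."""
    orb = cv2.ORB_create()
    _, des1 = orb.detectAndCompute(template_gray, None)
    _, des2 = orb.detectAndCompute(scene_gray, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test filters ambiguous matches.
    good = [p for p in matches if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```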
Citations: 7
One-stage Vehicle Engine Number Recognition System
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205775
Cheng-Hsiung Yang, Han-Shen Feng
This study proposes a one-stage vehicle engine number recognition system that avoids the traditional three-stage procedure of positioning, segmentation, and character recognition: without any image preprocessing, we directly locate and recognize the text targets in the engine image. In the experiment, 926 labeled images were used with transfer learning to train our prediction model, which was then tested on another 2310 unlabeled images; the overall accuracy reached 99.48%, and the execution time for recognizing a single image is 234 ms.
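A minimal evaluation-harness sketch for the reported figures, assuming `model` is any callable that returns the predicted engine-number string for an image and that ground-truth labels exist for the test set; the model itself is not reproduced here.

```python
# Hypothetical harness measuring exact-string accuracy and per-image latency.
import time

def evaluate(model, images, labels):
    """Return (accuracy, mean latency in seconds) over an evaluation set."""
    correct, latencies = 0, []
    for img, label in zip(images, labels):
        t0 = time.perf_counter()
        pred = model(img)
        latencies.append(time.perf_counter() - t0)
        correct += int(pred == label)
    return correct / len(labels), sum(latencies) / len(latencies)
```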
Citations: 1
Path Following for Autonomous Tractor under Various Soil Conditions and Unstable Lateral Dynamic
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205792
Min-Fan Ricky Lee, Asep Nugroho, W. Purbowaskito, Saul Nieto Bastida, Bahrudin
Lightening the workload of agricultural vehicle operators by providing autonomous functions is an important field of research, whose key challenges are maintaining accuracy and optimizing yields. Autonomous navigation of a tractor involves controlling several kinematic and dynamic subsystems, such as the tractor position, the yaw angle, and the longitudinal speed dynamics; this dynamic behavior is highly correlated with the soil conditions of the field. This paper proposes a kinematic controller based on Lyapunov's stability theorem (LST) for path following in an autonomous tractor. Moreover, a Fuzzy-PID controller is employed to control the longitudinal dynamics, and a state-feedback controller based on a linear quadratic regulator (LQR) handles the lateral dynamic behavior. Numerical simulation results in MATLAB show that the proposed algorithms can handle the uncertainty of soil conditions, represented by variations of the rolling friction coefficient.
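A minimal sketch of the LQR piece, assuming a hypothetical two-state linearized lateral model (lateral error and heading error at a constant forward speed); the paper's full model and its soil-dependent parameters are not reproduced here.

```python
# Hypothetical LQR gain for a simplified lateral error model.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """State-feedback gain K for u = -K x, via the continuous-time
    algebraic Riccati equation."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

v = 2.0                               # assumed forward speed, m/s
A = np.array([[0.0, v],               # d(lateral error)/dt = v * heading error
              [0.0, 0.0]])
B = np.array([[0.0], [1.0]])          # steering input drives heading error
K = lqr_gain(A, B, Q=np.diag([5.0, 1.0]), R=np.array([[1.0]]))
```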
Citations: 3
Estimation of Photosynthetic Growth Signature at the Canopy Scale Using New Genetic Algorithm-Modified Visible Band Triangular Greenness Index
Pub Date : 2020-08-01 DOI: 10.1109/ARIS50834.2020.9205787
Ronnie S. Concepcion, Sandy C. Lauguico, Rogelio Ruzcko Tobias, E. Dadios, A. Bandala, E. Sybingco
The greenness index has proven sensitive to vegetation properties in multispectral and hyperspectral imaging. However, most controlled microclimatic cultivation chambers are equipped with a low-cost RGB camera for crop growth monitoring, and the lack of camera credentials, especially the wavelength sensitivity of the visible bands, adds a challenge to materializing a greenness index. The method proposed in this study compensates for the unavailability of generic camera peak-wavelength sensitivities by employing a genetic algorithm (GA) to derive a visible-band triangular greenness index (TGI) based on a green-waveband-signal-normalized TGI model called gvTGI. The selection, mutation, and crossover rates used to configure the GA model are 0.2, 0.01, and 0.8, respectively. Lettuce images are captured from an aquaponic cultivation chamber over a 6-week crop life cycle. The annotated and extracted gvTGI channels are fed to the MobileNetV2, ResNet101, and InceptionResNetV2 deep learning models to estimate photosynthetic growth signatures at the canopy scale. In predicting the cultivation period in weeks after germination, MobileNetV2 bested the other image classification models with an accuracy of 80.56%; in estimating canopy area, it bested the other image regression models with an $\mathrm{R}^{2}$ of 0.9805. The proposed gvTGI proved highly accurate for estimating photosynthetic growth signatures using a generic RGB camera, thus providing a low-cost alternative for crop phenotyping.
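A minimal sketch of the index computation, assuming the broadband triangular greenness form TGI = G − 0.39·R − 0.61·B known from the literature; the paper's gvTGI replaces fixed coefficients with GA-tuned ones, so `c_r` and `c_b` below are the parameters a GA would search over, and the green-channel normalization is an assumed reading of "green waveband signal normalized".

```python
# Hypothetical gvTGI channel from an RGB image scaled to [0, 1].
import numpy as np

def gv_tgi(rgb: np.ndarray, c_r: float = 0.39, c_b: float = 0.61) -> np.ndarray:
    """Per-pixel greenness map; c_r and c_b are the coefficients a GA
    would tune (initialized here to the textbook broadband-TGI values)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    tgi = g - c_r * r - c_b * b
    # Normalize against the green signal (exact normalization is assumed).
    return tgi / (g + 1e-6)
```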
Citations: 24