A three-tracked-link robot was designed previously for autonomous welding inside double-hulled ship blocks with tight spaces and protruding stiffeners. Bilge blocks, a type of double-hulled block, have a tall wall at the entrance. Climbing down from this wall involves a risk of toppling, as none of the robot's three links (front arm, body, and rear arm) is long enough to reach the ground from the wall top, and the robot carries a heavy manipulator for welding. Rather than treating the manipulator as a burden, we explore using its motion to shift the center of gravity and help the robot climb down safely. In this paper, we propose the use of reinforcement learning and physics-based computer simulation to determine suitable motion sequences for safely climbing down from a tall wall. We discovered two effective safe-landing modes that use both arms for the major balancing acts and the manipulator for balance trimming during the controlled landing. The method also allowed us to explore how other design factors, such as manipulator size, manipulator motion type, and changes in the environment, affect the motion sequence.
"Landing A Mobile Robot Safely from Tall Walls Using Manipulator Motion Generated from Reinforcement Learning," C. Goh, Kyshalee Vazquez-Santiago, K. Shimada. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216977.
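The safe-landing idea above hinges on keeping the robot's combined center of gravity over its support region while descending. A minimal sketch of that balance check follows; the link names, masses, and positions are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: treat each link as a point mass and check whether the
# combined center of gravity (CoG) stays over the support interval.
# All masses and coordinates below are made-up illustrative numbers.

def combined_cog(links):
    """Mass-weighted average position of a set of links.

    links: list of (mass_kg, (x, y)) tuples.
    """
    total_mass = sum(m for m, _ in links)
    x = sum(m * p[0] for m, p in links) / total_mass
    y = sum(m * p[1] for m, p in links) / total_mass
    return x, y

def cog_over_support(links, x_min, x_max):
    """True if the horizontal CoG lies within the support interval."""
    x, _ = combined_cog(links)
    return x_min <= x <= x_max

# Swinging the heavy manipulator (last entry) backward shifts the CoG
# rearward, which is how its motion can keep the robot from toppling.
links = [(30.0, (0.4, 0.1)),   # front arm
         (60.0, (0.0, 0.2)),   # body
         (30.0, (-0.4, 0.1)),  # rear arm
         (40.0, (-0.5, 0.5))]  # manipulator, swung backward
```

A learned motion sequence would repeatedly adjust the manipulator pose so a check like `cog_over_support` holds throughout the descent.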
Emerging Pattern Mining (EPM) is a data mining task that finds discriminative characteristics between classes or data sets. In this paper, several EPM algorithms were applied to a data set containing the opinions of the medical staff of a Mexican hospital about the decrease in autopsies. We consider two attributes as class labels, motives for autopsy acceptance and motives for autopsy rejection, in order to identify factors, such as medical training and medical experience, that lead physicians to request or reject autopsies.
"Discovering Emerging Patterns from Medical Opinions about the Decrease of Autopsies Performed in a Mexican Hospital," Ingrid Aylin Ríos-Méndez, L. Rodríguez-Mazahua, J. P. Guzmán, Isaac Machorro-Cano, S. G. Peláez-Camarena, Celia Romero Torres, Hilarión Muñoz Contreras. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216751.
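The core notion behind EPM is a pattern's growth rate: the ratio of its support in one class to its support in the other. A pattern whose growth rate exceeds a threshold is "emerging" for that class. A minimal sketch with made-up opinion data (the item encodings below are illustrative, not the paper's attributes):

```python
# Sketch of the EPM growth-rate definition on toy "opinion" transactions.
# Each transaction is a set of items; a pattern is a set of items too.

def support(pattern, transactions):
    """Fraction of transactions that contain every item of the pattern."""
    hits = sum(1 for t in transactions if pattern <= t)
    return hits / len(transactions)

def growth_rate(pattern, class_a, class_b):
    """Support ratio of the pattern from class_b to class_a."""
    s_a = support(pattern, class_a)
    s_b = support(pattern, class_b)
    if s_b == 0:
        # Present only in class_a: a "jumping" emerging pattern.
        return float('inf') if s_a > 0 else 0.0
    return s_a / s_b

# Toy opinions; items could encode answers such as training level.
accept = [{'training:high', 'experience:10+'}, {'training:high'}]
reject = [{'training:low'}, {'training:low', 'experience:<5'}]

# {'training:high'} occurs in 2/2 accept opinions and 0/2 reject ones,
# so its growth rate is infinite: an emerging pattern for acceptance.
```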
In this paper, we study a nursing home staff scheduling optimization problem under resident demand uncertainty. We formulate a two-stage stochastic binary program with the objective of minimizing the total labor cost (linearly related to work time) incurred by both regular registered nurses (RRNs) and part-time nurses (PTNs). As a significant constraint, we balance the RRNs' total work time with the residents' total service need for every considered shift. In addition, we restrict feasible shift schedules based on common scheduling practice. We conduct a series of computational experiments to validate the proposed model and discuss the optimal solutions under different compositions of residents in terms of their disabilities. We also compare the total labor costs and an RRN scheduling flexibility index of the optimal solution under different combinations of RRNs and PTNs. Our analysis offers an operational approach for setting the minimum number of nurses on flexible shift schedules to cover the uncertain service needs while keeping labor cost at a minimum.
"Optimal Nursing Home Shift Scheduling: A Two-Stage Stochastic Programming Approach," Shujin Jiang, Mingyang Li, K. Hyer, N. Kong. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216967.
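The two-stage structure can be illustrated with a deliberately tiny toy model (all numbers below are invented, and the real formulation is a binary program with shift constraints, not this scalar version): the RRN staffing level is fixed before demand is known, and PTN hours are the per-scenario recourse that covers any shortfall.

```python
# Toy two-stage stochastic decision: choose RRN count first, then pay for
# PTN hours in each demand scenario. Illustrative costs and demands only.

RRN_HOURS = 8              # work time per RRN per shift
RRN_COST = 240.0           # cost per RRN per shift
PTN_COST_PER_HOUR = 45.0   # PTNs cost more per hour than RRNs

# (probability, total service hours needed) per resident-mix scenario
scenarios = [(0.5, 30.0), (0.3, 40.0), (0.2, 55.0)]

def expected_cost(n_rrn):
    """First-stage RRN cost plus expected second-stage PTN recourse cost."""
    cost = n_rrn * RRN_COST
    for prob, demand in scenarios:
        shortfall = max(0.0, demand - n_rrn * RRN_HOURS)
        cost += prob * shortfall * PTN_COST_PER_HOUR
    return cost

# The first-stage decision space is tiny, so brute force suffices here.
best_n = min(range(0, 10), key=expected_cost)
```

With these numbers the optimum schedules 4 RRNs and absorbs the high-demand scenario with PTN hours, mirroring the paper's point that a minimum core of flexible nurses covers uncertain service needs at minimum cost.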
Building an accurate material flow simulation model is a time-consuming task that requires deep expertise in both simulation techniques and the production system. Recently, data-driven modeling approaches that build simulation models from production logs have been attracting attention as a way to automate the modeling process. In most practical cases, however, the production log does not have enough resolution to specify the input and output of each agent in the material flow simulation, such as the processing-time and dispatching agents. To address this issue, we propose a novel approach that models multiple agents simultaneously from a sparse production log. In our method, agents are described as machine learning models, and the parameters of these models are calibrated to minimize the simulation error. We confirmed the usefulness of the proposed method through experiments with a virtual production system.
"Toward data-driven modeling of material flow simulation: automatic parameter calibration of multiple agents from sparse production log," S. Nagahara, Susumu Serita, Yuma Shiho, Shuai Zheng, Haiyan Wang, Takafumi Chida, Chetan Gupta. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216832.
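The calibrate-to-minimize-simulation-error loop can be sketched in miniature. This is an assumption-laden toy, not the paper's method: a one-machine "simulation" with one parameter, a sparse log of end-of-shift counts, and a grid search in place of the paper's learning-based calibration.

```python
# Illustrative sketch of calibration by simulation error: choose the agent
# parameter that best reproduces the (sparse) production log.

def simulate_throughput(processing_time, shift_hours=8.0):
    """Parts produced by a single machine in one shift (toy model)."""
    return shift_hours * 3600.0 / processing_time

def calibration_error(processing_time, logged_throughputs):
    """Sum of squared errors between simulation and the production log."""
    sim = simulate_throughput(processing_time)
    return sum((sim - obs) ** 2 for obs in logged_throughputs)

# Sparse log: only end-of-shift part counts, no per-part timestamps.
log = [570.0, 580.0, 575.0]

# Grid search over candidate processing times (seconds per part).
candidates = [45.0, 48.0, 50.0, 52.0, 55.0]
best = min(candidates, key=lambda p: calibration_error(p, log))
```

The paper calibrates many interacting agents at once, which is what makes the problem hard; this sketch only shows the objective being minimized.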
Image instance segmentation plays an important role in mechanical search, a task in which robots must search for a target object in a cluttered scene. Perception pipelines for this task often rely on target object color or depth information and require multiple networks to segment and identify the target object. However, creating large training datasets of real images for these networks can be time-intensive, and the networks may require retraining for novel objects. We propose OSSIS, a single-stage One-Shot Shape-based Instance Segmentation algorithm that produces the target object's modal segmentation mask in a depth image of a scene based only on a binary shape mask of the target object. We train a fully-convolutional Siamese network with 800,000 pairs of synthetic binary target object masks and scene depth images, then evaluate the network on real target objects never seen during training, in densely-cluttered scenes with target object occlusions. OSSIS achieves a one-shot mean intersection-over-union (mIoU) of 0.38 on the real data, improving on filter-matching and two-stage CNN baselines by 21% and 6%, respectively, while reducing computation time by a factor of 50 compared to the two-stage CNN, in part because OSSIS is single-stage and does not require pairwise segmentation mask comparisons.
"One-Shot Shape-Based Amodal-to-Modal Instance Segmentation," Andrew Li, Michael Danielczuk, Ken Goldberg. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216733.
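The mIoU figure quoted above averages the intersection-over-union of predicted and ground-truth masks. A minimal sketch of the metric itself, on toy hand-made masks:

```python
# Sketch of the IoU metric: overlap of two binary masks divided by their
# union. Masks here are tiny illustrative lists of 0/1 rows.

def iou(pred, truth):
    """IoU of two same-sized binary masks given as lists of 0/1 rows."""
    inter = union = 0
    for row_p, row_t in zip(pred, truth):
        for p, t in zip(row_p, row_t):
            inter += p & t
            union += p | t
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [1, 0, 0]]
truth = [[1, 1, 0],
         [0, 1, 0]]
# intersection = 2 pixels, union = 4 pixels -> IoU = 0.5
```

Averaging `iou` over all test scenes gives the mean IoU (mIoU) reported in the abstract.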
Fault diagnosis is a vital technique for pinpointing machine malfunctions in manufacturing systems. In recent years, deep learning techniques have greatly improved fault detection accuracy, but some problems remain. If a fault is absent from the training data, or the fault signal is disturbed by severe noise, the fault classifier may misjudge the health state. This problem limits the reliability of fault diagnosis in real applications. In this paper, we enhance the fault diagnosis method using a Bayesian Convolutional Neural Network (BCNN), and present a Shannon entropy-based method to quantify the prediction uncertainty. The BCNN turns deterministic predictions into probabilistic distributions and enhances the robustness of the fault diagnosis. The uncertainty quantification method helps to flag wrong predictions, detect unknown faults, and discover strong disturbances. A fine-tuning strategy is then applied to further enhance the model performance. The potential usability of the proposed method for monitoring the motors of 3D printers is studied, with experiments conducted on the motor bearing dataset provided by Case Western Reserve University. The proposed BCNN achieves 99.82% fault classification accuracy over nine health conditions. Its robustness is verified by comparing testing accuracy with three other methods on noisy datasets, and the uncertainty quantification method successfully detects outlier inputs.
"An Enhanced Fault Diagnosis Method with Uncertainty Quantification Using Bayesian Convolutional Neural Network," Qihang Fang, Gang Xiong, Xiuqin Shang, Sheng Liu, Bin Hu, Zhen Shen. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216773.
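A common way to combine a BCNN with Shannon entropy, sketched below under the assumption (not stated in the abstract) that uncertainty is read off the averaged stochastic forward passes: average several Monte Carlo softmax outputs and compute the entropy of the mean distribution. The probability vectors are made up for illustration.

```python
# Sketch of entropy-based uncertainty: low entropy when the stochastic
# passes agree, high entropy when they disagree (flagging outlier inputs).

import math

def predictive_entropy(mc_probs):
    """Shannon entropy (nats) of the mean of Monte Carlo softmax outputs."""
    n = len(mc_probs)
    k = len(mc_probs[0])
    mean = [sum(sample[c] for sample in mc_probs) / n for c in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)

# Confident prediction: all passes agree -> near-zero entropy.
confident = [[0.98, 0.01, 0.01]] * 5
# Unknown fault or strong disturbance: passes disagree -> high entropy.
uncertain = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]
```

Thresholding this entropy is one way to flag predictions that should not be trusted, which is the role the uncertainty measure plays in the paper.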
With the rapid development of mobile robot technology, robots are playing an increasingly important role in people's daily lives. Navigation, one of the key basic capabilities of mobile robots, must also cope with new challenges: navigating efficiently and without collisions in complex, changing human environments is a problem that urgently needs to be solved. Mobile robots can currently achieve efficient navigation in static environments, but in the unstructured, fast-changing environments of daily human society they need more flexible navigation strategies to handle dynamic scenarios. This paper builds a 3D simulation environment for robot navigation via deep reinforcement learning in dense pedestrian environments and proposes a new navigation approach for such environments. The simulation environment integrates Gazebo, the ROS navigation stack, Stable Baselines, and the Social Force Pedestrian Simulator. It is built on the Gazebo simulation platform so that rich environmental information around the robot can be collected; the ROS navigation stack provides traditional path planning methods; Stable Baselines, a set of improved implementations of reinforcement learning algorithms based on OpenAI Baselines, makes it easy to call mainstream reinforcement learning algorithms; and the Social Force Pedestrian Simulator, a pedestrian simulation package in which pedestrian movement follows the social force model, imitates dense pedestrian scenarios realistically. Our navigation approach combines the global optimality of traditional global path planning with the local obstacle-avoidance ability of reinforcement learning. First, we plan a global path using the A* algorithm. Second, we use Soft Actor Critic (SAC) to follow waypoints generated at a certain distance along the global path, making action decisions while performing agile obstacle avoidance. Experiments show that our simulation environment makes it easy to set up a robot navigation task and that navigation approaches can be simulated in various dense pedestrian environments.
"A 3D Simulation Environment and Navigation Approach for Robot Navigation via Deep Reinforcement Learning in Dense Pedestrian Environment," Qi Liu, Yanjie Li, Lintao Liu. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9217023.
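The "waypoints generated at a certain distance on the global path" step can be sketched as a lookahead selection, in the spirit of pure pursuit. This is an illustrative guess at the mechanism, not the paper's implementation; the learned SAC policy would then steer toward the returned point.

```python
# Sketch: pick the first point on the A*-planned global path that lies at
# least a lookahead distance ahead of the robot. Path and distances are
# illustrative.

import math

def next_waypoint(robot_xy, path, lookahead):
    """First path point at least `lookahead` away, scanning forward from
    the path point nearest the robot."""
    start = min(range(len(path)), key=lambda i: math.dist(robot_xy, path[i]))
    for point in path[start:]:
        if math.dist(robot_xy, point) >= lookahead:
            return point
    return path[-1]  # near the goal: head straight for it

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 1.0)]
```

Scanning forward from the nearest path point keeps the robot from chasing waypoints it has already passed.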
Many studies indicate that multi-event accidents involve higher risks and more severe injuries than single-event accidents. Impulsive impacts can produce large heading angles and lateral deviations, which result in severe secondary accidents. Active safety technology can be applied to control the motion after an impact. In this paper, a sub-optimal second-order sliding mode (SSOSM) controller based on a three-degree-of-freedom vehicle model is designed to mitigate the undesired motion after a fictitious impact. The controller is verified on a high-fidelity co-simulation platform combining CarSim and MATLAB/Simulink. The simulation results show that the SSOSM controller achieves good performance in tracking the desired heading angle and lateral deviation from the path, while the sliding-mode chattering phenomenon is reduced and smooth control results are obtained.
"Robust Active Post-Impact Motion Control for Restraining a Second Crash," Di Ao, Xingqi Hua, Guokuan Yu, Daqiang Guo, Zhenzhong Jia. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216882.
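The chattering reduction mentioned above is usually obtained by smoothing the discontinuous switching term of a sliding mode law. A first-order sketch of that idea follows; the paper's SSOSM algorithm is considerably more elaborate, and all gains here are illustrative assumptions.

```python
# Sketch of the sliding-mode idea: drive the sliding variable
# s = de + lam * e to zero, replacing sign(s) with a boundary-layer
# saturation so the command varies smoothly near the surface.

def sat(x):
    """Saturation: continuous stand-in for sign(x) inside |x| <= 1."""
    return max(-1.0, min(1.0, x))

def smc_control(e, de, lam=2.0, k=5.0, phi=0.1):
    """Control command from tracking error e and its rate de.

    lam: sliding-surface slope; k: switching gain; phi: boundary-layer width.
    """
    s = de + lam * e
    return -k * sat(s / phi)

# Far from the sliding surface the command saturates at +/- k; inside the
# boundary layer (|s| < phi) it varies linearly, which avoids chattering.
```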
In the mining industry, it is common to employ several kilometers of pipes to carry tailings from the plant to a dam. In the Salobo Mine alone, a copper operation of Vale S.A. in the Amazon forest, there are more than three and a half kilometers of tailing pipes. Since the material passing through a tailing pipe causes abrasion that can lead to failures, regular inspections are needed. However, given how risky the environment is for manual inspections, a teleoperated or autonomous robot is a crucial tool for keeping track of pipe health. In this work, we propose a deep-learning methodology to process the stream of images from the robot, aiming to detect early failures in real time directly on the device's onboard computer. Multiple deep-learning image classification architectures were evaluated for detecting the anomalies. We validated the early damage detection accuracy and pinpointed the approximate location of the anomalies using the networks' Class Activation Mapping. We then measured the runtime of the best-performing network architectures on different hardware to analyze the need for an onboard GPU. Moreover, we trained a Single Shot object Detector to find the boundaries of the pipe joints, so that anomaly classification is performed only when a joint is detected. Our results show that it is possible to build an automatic anomaly detection system into the robot's software.
"Deep Learning for Early Damage Detection of Tailing Pipes Joints with a Robotic Device," Levi Welington de Resende Filho, A. Santos, Héctor Azpúrua, G. Garcia, G. Pessin. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9216881.
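Class Activation Mapping, which the work uses to localize anomalies, weights each final-layer feature map by the classifier weight of the predicted class and sums them. A minimal sketch with tiny hand-made maps standing in for a real CNN's features:

```python
# Sketch of CAM: a weighted sum of final feature maps highlights the image
# regions most responsible for a class score. Values are illustrative.

def class_activation_map(feature_maps, class_weights):
    """Sum of feature maps weighted by one class's classifier weights."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, weight in zip(feature_maps, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam

# Two 2x2 feature maps; the "damage" class weighs the first map highly.
fmaps = [[[1.0, 0.0], [0.0, 0.0]],
         [[0.0, 0.0], [0.0, 1.0]]]
cam = class_activation_map(fmaps, [2.0, 0.5])
# The hottest cell marks the approximate location of the anomaly.
```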
We present an end-to-end system for automatically deploying an Autonomous Mobile Robot (AMR) in a warehouse for point-to-point transportation tasks. Our approach includes creating a motion library that provides guarantees on the motion of the AMR, automatically creating a graph representing safe motions in the warehouse, and using Linear Temporal Logic (LTL) specifications and synthesis to compose appropriate motion primitives to accomplish a task, even in the presence of other people and robots in the warehouse. We demonstrate our approach in simulation and with a physical robot.
"Warehouse Automation in a Day: From Model to Implementation with Provable Guarantees," Guy Scher, H. Kress-Gazit. In: 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Aug. 2020. DOI: 10.1109/CASE48305.2020.9217012.
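Composing motion primitives over a graph of safe motions can be sketched with a plain graph search; the LTL synthesis in the paper produces reactive strategies far beyond this, so treat the following as an illustrative toy with an invented warehouse layout.

```python
# Sketch: nodes are discrete robot poses, edges are motion primitives
# guaranteed safe, and breadth-first search composes a primitive sequence
# for a point-to-point transportation task.

from collections import deque

def plan(graph, start, goal):
    """Shortest primitive sequence from start to goal, or None."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, moves = queue.popleft()
        if node == goal:
            return moves
        for primitive, nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, moves + [primitive]))
    return None

# Invented layout: node -> list of (primitive, next_node) edges.
graph = {
    'dock':   [('forward', 'aisle1')],
    'aisle1': [('turn_left', 'aisle2'), ('forward', 'shelf_a')],
    'aisle2': [('forward', 'shelf_b')],
}
```

A task specification then reduces to stitching such plans together, while the LTL layer handles reacting to people and other robots at runtime.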