Towards Real-time Estimation of a Spherical Eye Model based on a Single Fiber OCT
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981620 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 666-672
P. Cornelissen, M. Ourak, G. Borghesan, D. Reynaerts, E. V. Poorten
Depth perception remains a key challenge in vitreoretinal surgery. Currently, users only have the view from an overhead stereo-microscope available, and this means of visualisation is quite restrictive. Optical Coherence Tomography (OCT) has been introduced in the past and is even integrated in a number of commercial systems to provide more detailed depth visualisation, showing subsurface structures up to a few millimeters below the surface. The intra-operative use of OCT, especially in combination with robotics, is still sub-optimal: at present one can obtain either a very slowly updating large volume scan (C-scan), a faster but unaligned cross-sectional B-scan, or an even more local single-point A-scan at a very high update rate. In this work we propose a model-mediated approach. As the posterior eye segment can be approximated as a sphere, we propose to model the retina with this simplified sphere model, the center and radius of which can be estimated in real time. A time-varying Kalman filter is proposed here in combination with an instrument-integrated optical fiber that provides high-frequency A-scans along the longitudinal instrument direction. The model and its convergence have been validated first in a simulation environment and subsequently on a physical setup, using an OCT A-scan probe mounted on a co-manipulated vitreoretinal robotic system. The probe was manipulated to measure a stereolithographically 3D-printed spherical model of the posterior eye segment. The feasibility of the proposed method was demonstrated in various scenarios. For the experimental validation, an error of 20 micrometers and a convergence time of 2.0 seconds were found when sampling A-scans at 200 Hz.
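To make the estimation step concrete, here is a minimal Python/NumPy sketch (not the authors' implementation) of fitting a sphere to A-scan hits: each range measurement is converted into a surface point p along the instrument axis, and a linear Kalman filter tracks the state [c, k] with k = r^2 - ||c||^2, using the identity ||p||^2 = 2 p.c + k. Tip poses, noise levels and the toy data are illustrative assumptions.

```python
import numpy as np

class SphereKF:
    """Kalman filter for a sphere with centre c and k = r^2 - ||c||^2.

    For any point p on the sphere, ||p||^2 = 2 p.c + k, which is linear in
    the state x = [cx, cy, cz, k], so a standard linear update applies.
    """

    def __init__(self, process_var=0.0, meas_var=1e-9):
        self.x = np.zeros(4)                   # initial guess: centre at origin
        self.P = np.diag([1e-2] * 3 + [1.0])   # broad initial covariance
        self.Q = process_var * np.eye(4)       # > 0 would let the sphere drift slowly
        self.R = meas_var                      # variance of the ||p||^2 measurement

    def update(self, tip_pos, tip_axis, a_scan_range):
        # Surface point hit by the A-scan along the instrument axis.
        p = np.asarray(tip_pos) + a_scan_range * np.asarray(tip_axis)
        H = np.append(2.0 * p, 1.0)            # 1x4 measurement Jacobian
        z = p @ p
        self.P = self.P + self.Q               # predict (static sphere model)
        S = H @ self.P @ H + self.R
        K = self.P @ H / S
        self.x = self.x + K * (z - H @ self.x)
        self.P = (np.eye(4) - np.outer(K, H)) @ self.P

    @property
    def centre(self):
        return self.x[:3]

    @property
    def radius(self):
        return float(np.sqrt(max(self.x[3] + self.x[:3] @ self.x[:3], 0.0)))


# Toy run: noisy A-scans of a 12 mm sphere, probe always aimed at the centre.
rng = np.random.default_rng(0)
c_true, r_true = np.array([1e-3, -2e-3, 0.5e-3]), 12e-3
kf = SphereKF()
for _ in range(500):
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    tip = c_true - 20e-3 * axis                # probe tip 20 mm from the centre
    range_true = 20e-3 - r_true                # range to the surface along the axis
    kf.update(tip, axis, range_true + rng.normal(scale=20e-6))
print(np.round(kf.centre, 5), round(kf.radius, 5))   # close to c_true and 12 mm
```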
Null Space-Based Formation Control with Leader Change Possibility
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981567 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 266-271
Valentim Ernandes Neto, M. S. Filho, A. Brandão
This work discusses the implementation and experimentation of a control system designed to guide a multi-robot formation of one aerial and three ground robots in a trajectory-tracking task. The implementation considers three partial formations of two robots, each associated with a virtual structure, a straight line in 3D space with its own formation controller. Two of the partial line formations are homogeneous formations of two ground vehicles, and the third one is a heterogeneous formation involving a ground and an aerial vehicle. All the partial formations have the same position reference, which is one of the ground vehicles. This also allows discussing what happens if this ground vehicle, the reference for all the line formations and thus for the whole formation, fails, a scenario considered here in connection with an escorting mission. The result is that the designed control system is able to accomplish the mission even upon a failure of the reference robot, which is validated by the experimental results also discussed here.
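As a rough illustration of how a null-space-based scheme composes a primary and a secondary formation task (a generic sketch, not the controller of this paper), the following snippet drives the centroid of two planar robots while regulating their separation only in the null space of the centroid task. The task choice, gains and dimensions are illustrative.

```python
import numpy as np

def nsb_velocities(p1, p2, centroid_des, dist_des, k1=1.0, k2=1.0):
    """Null-space-based velocity command for two planar robots.

    Primary task: formation centroid position. Secondary task: inter-robot
    distance, executed only in the null space of the primary task.
    Returns the stacked velocity [v1; v2].
    """
    # Task 1: centroid sigma1 = (p1 + p2)/2, Jacobian J1 (2x4).
    sigma1 = (p1 + p2) / 2.0
    J1 = 0.5 * np.hstack([np.eye(2), np.eye(2)])
    v_task1 = np.linalg.pinv(J1) @ (k1 * (centroid_des - sigma1))
    # Task 2: distance sigma2 = ||p1 - p2||, Jacobian J2 (1x4).
    d = p1 - p2
    dist = np.linalg.norm(d)
    J2 = np.hstack([d, -d]) / dist
    v_task2 = np.linalg.pinv(J2[None, :]) @ np.atleast_1d(k2 * (dist_des - dist))
    # Project the secondary task into the null space of the primary one.
    N1 = np.eye(4) - np.linalg.pinv(J1) @ J1
    return v_task1 + N1 @ v_task2

# Toy usage: drive the centroid to (1, 1) while keeping a 2 m separation.
p1, p2 = np.array([0.0, 0.0]), np.array([0.5, 0.0])
for _ in range(200):
    v = nsb_velocities(p1, p2, centroid_des=np.array([1.0, 1.0]), dist_des=2.0)
    p1, p2 = p1 + 0.05 * v[:2], p2 + 0.05 * v[2:]
print(np.round((p1 + p2) / 2, 3), round(float(np.linalg.norm(p1 - p2)), 3))
```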
Pedestrian Flow Estimation Using Sparse Observation for Autonomous Vehicles
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981587 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 779-784
R. B. Neto, K. Ohno, Thomas Westfechtel, S. Tadokoro
One of the major challenges that autonomous cars face today is the unpredictability of pedestrian movement in urban environments. Since pedestrian data acquired by vehicles are only sparsely observed, a pedestrian flow directed graph is proposed to understand pedestrian behavior. In this work, an autonomous electric vehicle is employed to gather LiDAR and camera data. Pedestrian tracking information and semantic information from the environment are used with a probabilistic approach to create the graph. In order to refine the graph, a set of outlier removal techniques is described. The graph-based pedestrian flow shows an increase of 61.29% in coverage zone, and the outlier removal approach successfully removed 81% of the edges.
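A minimal sketch of how such a flow graph could be assembled from sparse tracks (illustrative only; the paper's probabilistic construction and outlier tests are more elaborate): grid cells become nodes, observed cell-to-cell transitions become weighted directed edges, and low-support edges are pruned. The cell size and thresholds are assumed values.

```python
from collections import defaultdict

def build_flow_graph(tracks, cell_size=2.0, min_count=2, min_prob=0.05):
    """Build a pedestrian-flow directed graph from sparse tracks.

    tracks: list of trajectories, each a list of (x, y) positions.
    Nodes are grid cells; an edge a -> b counts how often a pedestrian
    moved from cell a to cell b. Low-support edges are pruned as outliers.
    """
    cell = lambda p: (int(p[0] // cell_size), int(p[1] // cell_size))
    counts = defaultdict(int)
    for track in tracks:
        cells = [cell(p) for p in track]
        for a, b in zip(cells, cells[1:]):
            if a != b:
                counts[(a, b)] += 1
    # Convert counts to transition probabilities per source cell.
    out_total = defaultdict(int)
    for (a, _), c in counts.items():
        out_total[a] += c
    graph = {}
    for (a, b), c in counts.items():
        prob = c / out_total[a]
        if c >= min_count and prob >= min_prob:   # simple outlier removal
            graph.setdefault(a, {})[b] = prob
    return graph

# Toy usage: two tracks along a sidewalk plus one spurious detour.
tracks = [
    [(0.5, 0.5), (2.5, 0.5), (4.5, 0.5), (6.5, 0.5)],
    [(0.6, 0.4), (2.4, 0.6), (4.6, 0.4), (6.4, 0.6)],
    [(0.5, 0.5), (2.5, 4.5)],                      # outlier transition
]
print(build_flow_graph(tracks))
```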
Multi-robot On-line Sampling Scheduler for Persistent Monitoring
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981550 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 617-622
D. Macharet, A. A. Neto
The employment of autonomous agents for persistent monitoring tasks has significantly increased in recent years. In this context, the data collection process must take into account limited resources, such as time and energy, whilst acquiring a sufficient amount of data to generate accurate models of the underlying phenomena. Many schedulers in the literature act in an off-line manner, meaning they define the visiting sequence and generate a set of paths before any observations are made. On-line approaches, however, can adapt their behavior based on previously collected data, allowing more precise models to be obtained. In this paper, we propose an on-line scheduler which evaluates the sampling rate of the signals being measured to assign different priorities to different Points-of-Interest (PoIs). According to this priority, it is then determined whether a region must be visited more or less frequently to increase our knowledge of the phenomenon. Our methodology was evaluated through several experiments in a simulated environment.
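The priority idea can be illustrated with a small sketch (assumptions: the dominant frequency is estimated by an FFT, a Nyquist-style oversampling rule sets the revisit period, and the most-overdue PoI is visited next; none of this is taken from the paper): faster-varying PoIs get shorter revisit periods and are therefore scheduled more often.

```python
import numpy as np

def dominant_frequency(samples, timestamps):
    """Rough dominant-frequency estimate of a sampled signal."""
    t = np.asarray(timestamps)
    z = np.asarray(samples) - np.mean(samples)
    dt = np.mean(np.diff(t))
    spectrum = np.abs(np.fft.rfft(z))
    freqs = np.fft.rfftfreq(len(z), d=dt)
    return freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin

def revisit_periods(history, oversample=4.0, t_min=5.0, t_max=600.0):
    """Assign each PoI a revisit period inversely proportional to its
    estimated dominant frequency (Nyquist-style rule with oversampling)."""
    periods = {}
    for poi, (samples, timestamps) in history.items():
        f = max(dominant_frequency(samples, timestamps), 1e-6)
        periods[poi] = float(np.clip(1.0 / (oversample * f), t_min, t_max))
    return periods

def next_poi(periods, last_visit, now):
    """Pick the PoI whose revisit deadline is most overdue."""
    urgency = {p: (now - last_visit[p]) / periods[p] for p in periods}
    return max(urgency, key=urgency.get)

# Toy usage: PoI 'a' varies fast, PoI 'b' varies slowly.
t = np.arange(0.0, 100.0, 1.0)
history = {
    "a": (np.sin(2 * np.pi * 0.20 * t), t),
    "b": (np.sin(2 * np.pi * 0.01 * t), t),
}
periods = revisit_periods(history)
print(periods, next_poi(periods, last_visit={"a": 0.0, "b": 0.0}, now=30.0))
```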
Physical Human-Robot Interaction under Joint and Cartesian Constraints
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981579 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 185-191
J. D. M. Osorio, F. Allmendinger, M. D. Fiore, U. Zimmermann, T. Ortmaier
This paper addresses the problem of including Cartesian and joint constraints in the stack of tasks for torque-controlled robots. A novel approach is proposed to handle Cartesian and joint constraints on three different levels: position, velocity and acceleration. These constraints are included in the stack of tasks while ensuring the maximum possible fulfillment of the tasks despite the constraints. The algorithm proceeds by creating two tasks with the highest priority in a stack-of-tasks scheme. The highest-priority task saturates the acceleration of the joints that would exceed their motion limits. The second highest-priority task saturates the acceleration of the Cartesian directions that would exceed their motion limits. Experiments to test the performance of the algorithm are performed on the KUKA LBR iiwa.
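A minimal sketch of the kind of saturation such a highest-priority task performs (illustrative, not the paper's formulation): position, velocity and acceleration limits are converted into per-cycle acceleration bounds and the commanded joint acceleration is clamped to them; the same bounding can be applied per Cartesian direction. Handling of mutually conflicting bounds is deliberately omitted.

```python
import numpy as np

def acceleration_bounds(q, dq, q_min, q_max, dq_max, ddq_max, dt):
    """Per-joint acceleration bounds enforcing, over one control cycle,
    position, velocity and acceleration limits simultaneously."""
    # Acceleration level.
    upper = np.full_like(q, ddq_max)
    lower = np.full_like(q, -ddq_max)
    # Velocity level: dq + ddq*dt must stay within [-dq_max, dq_max].
    upper = np.minimum(upper, (dq_max - dq) / dt)
    lower = np.maximum(lower, (-dq_max - dq) / dt)
    # Position level: q + dq*dt + 0.5*ddq*dt^2 must stay within [q_min, q_max].
    upper = np.minimum(upper, 2.0 * (q_max - q - dq * dt) / dt**2)
    lower = np.maximum(lower, 2.0 * (q_min - q - dq * dt) / dt**2)
    # Note: if lower > upper the limits conflict; a real scheme must arbitrate.
    return lower, upper

def saturate(ddq_task, lower, upper):
    """Clamp the task-resolved joint acceleration to the admissible box."""
    return np.clip(ddq_task, lower, upper)

# Toy usage: one joint moving fast toward its upper position limit.
q, dq = np.array([1.9919]), np.array([0.8])
lower, upper = acceleration_bounds(q, dq, q_min=-2.0, q_max=2.0,
                                   dq_max=1.5, ddq_max=10.0, dt=0.01)
print(saturate(np.array([5.0]), lower, upper))   # command clamped near the limit
```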
Simulation and Analysis of a Full-Active Electro-Hydrostatic Powered Ankle Prosthesis
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981634 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 81-86
Huan Liu, Qitao Huang, Zhizhong Tong
This paper presents the design and control architecture of a novel full-active powered ankle prosthesis which uses an integrated force-controllable electro-hydrostatic actuator (EHA) to provide both active compliance and sufficient positive power output at terminal stance to assist walking throughout the whole gait cycle. A 100 W brushless DC motor driving a 0.45 cc/rev bi-directional gear pump operates as the power kernel. Based on a finite-state machine (FSM), a hierarchical controller was designed to ensure the control system performance while different control strategies were implemented in each individual gait phase. Three independent force sensing resistors (FSRs) mounted under the sole, two pressure transducers and a displacement sensor used as an ankle rotation sensor provide feedback signals for both state detection and low-level impedance control. A simulation model of the ankle prosthesis system was established in Matlab/Simulink to validate its feasibility. Using pre-sampled biomechanics profiles as the input and as a matched reference, the conceptual ankle prosthesis turns out to be able to restore the dynamic interaction response of a healthy ankle-foot to a great extent.
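To illustrate the FSM-plus-impedance structure described above (a sketch under assumed gait phases, transition conditions and parameter values, none taken from the paper), the snippet below switches impedance parameters per phase based on FSR and ankle-angle signals and computes the low-level ankle torque.

```python
from dataclasses import dataclass

@dataclass
class Impedance:
    k: float          # stiffness [N*m/rad]
    b: float          # damping [N*m*s/rad]
    theta_eq: float   # equilibrium ankle angle [rad]
    tau_push: float = 0.0  # extra push-off torque [N*m]

# Illustrative per-phase parameters, not values from the paper.
PHASES = {
    "controlled_plantarflexion": Impedance(k=150.0, b=3.0, theta_eq=0.0),
    "controlled_dorsiflexion":   Impedance(k=250.0, b=5.0, theta_eq=0.0),
    "powered_plantarflexion":    Impedance(k=250.0, b=5.0, theta_eq=-0.3, tau_push=60.0),
    "swing":                     Impedance(k=20.0,  b=1.0, theta_eq=0.0),
}

def next_phase(phase, heel_fsr, toe_fsr, theta):
    """Finite-state transitions driven by foot contact and ankle angle."""
    if phase == "swing" and heel_fsr:
        return "controlled_plantarflexion"
    if phase == "controlled_plantarflexion" and toe_fsr and theta > 0.05:
        return "controlled_dorsiflexion"
    if phase == "controlled_dorsiflexion" and not heel_fsr and theta > 0.15:
        return "powered_plantarflexion"
    if phase == "powered_plantarflexion" and not toe_fsr:
        return "swing"
    return phase

def ankle_torque(phase, theta, dtheta):
    """Low-level impedance law for the current gait phase."""
    p = PHASES[phase]
    return -p.k * (theta - p.theta_eq) - p.b * dtheta + p.tau_push

# Toy usage: heel strike detected during swing, then compute the phase torque.
phase = next_phase("swing", heel_fsr=True, toe_fsr=False, theta=-0.05)
print(phase, round(ankle_torque(phase, theta=-0.05, dtheta=-0.5), 2))
```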
Autonomous System for a Racing Quadcopter
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981660 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 1-6
Adriano M. C. Rezende, Victor R. F. Miranda, Henrique N. Machado, Antonio C. B. Chiella, V. M. Gonçalves, G. Freitas
In this paper, we present a methodology to make an autonomous drone fly through a sequence of gates using only on-board sensors. Our work is a solution to the AlphaPilot Challenge, proposed by Lockheed Martin and the Drone Racing League. First, we propose a strategy to generate a smooth trajectory that passes through the gates. Then, we develop a localization system which merges image data from an on-board camera with IMU data. Finally, we present an artificial-vector-field-based strategy used to control the quadcopter. Our results are validated with simulations in the official simulator of the competition and with preliminary experiments on a real drone.
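As a toy illustration of an artificial vector field for this kind of task (a generic construction, not the specific field designed by the authors), the snippet below sums a component that converges to the segment between two gates and a component that advances along it; all gains and positions are illustrative.

```python
import numpy as np

def gate_field(p, gate_a, gate_b, k_conv=2.0, v_tan=2.0):
    """Velocity command from a simple artificial vector field: one term
    converges to the line segment between two gates, the other moves
    along it toward the next gate."""
    a, b, p = map(np.asarray, (gate_a, gate_b, p))
    t_hat = (b - a) / np.linalg.norm(b - a)            # path tangent
    s = np.clip((p - a) @ t_hat, 0.0, np.linalg.norm(b - a))
    e = (a + s * t_hat) - p                             # error toward the path
    return k_conv * e + v_tan * t_hat

# Toy usage: integrate the field from an offset start position.
p = np.array([0.0, 1.0, 1.5])
gates = [np.array([0.0, 0.0, 1.0]), np.array([5.0, 0.0, 1.0])]
for _ in range(40):
    p = p + 0.05 * gate_field(p, *gates)
print(np.round(p, 2))   # near the line y=0, z=1, advancing toward the second gate
```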
SPRINTER: A Discrete Locomotion Robot for Precision Swarm Printing
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981621 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 733-738
Kedar Karpe, Ayon Chatterjee, P. Srinivas, Dhanalakshmi Samiappan, Kumar Ramamoorthy, Lorenzo Sabattini
This paper presents SPRINTER, a system and method for multi-robot printing. We discuss the design of a quasi-holonomic mobile robot and present a method which uses a group of such robots to print a large graphical image in a distributed manner. In the distributed printing method, we introduce the concept of image cellularization for segmenting the graphic into a group of smaller printing tasks. We then discuss a centralized method to allocate these tasks to each robot and execute the printing process. In summary, we present a multi-robot printing system which enhances the printing speed and extends the printable area compared to traditional industrial printers.
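A minimal sketch of image cellularization and a centralized allocation step (the cell size, ink threshold and greedy nearest-cell policy are assumptions, not the paper's algorithm): the image is split into square cells, empty cells are discarded, and the remaining printing tasks are handed out to the robots.

```python
import numpy as np

def cellularize(image, cell=16, ink_threshold=0.05):
    """Split a grayscale image (0 = blank, 1 = ink) into square cells and
    keep only the cells that actually contain something to print."""
    h, w = image.shape
    tasks = []
    for r in range(0, h, cell):
        for c in range(0, w, cell):
            patch = image[r:r + cell, c:c + cell]
            if patch.mean() > ink_threshold:
                tasks.append(((r, c), patch))
    return tasks

def allocate(tasks, robot_positions):
    """Greedy centralized allocation: repeatedly give the robot with the
    lightest workload the remaining cell closest to its last assignment."""
    pending = list(tasks)
    plans = {i: [] for i in range(len(robot_positions))}
    last = {i: np.asarray(p, dtype=float) for i, p in enumerate(robot_positions)}
    while pending:
        i = min(plans, key=lambda k: len(plans[k]))          # lightest workload
        j = min(range(len(pending)),
                key=lambda n: np.linalg.norm(np.asarray(pending[n][0]) - last[i]))
        origin, _patch = pending.pop(j)
        plans[i].append(origin)
        last[i] = np.asarray(origin, dtype=float)
    return plans

# Toy usage: a diagonal stripe printed by three robots.
img = np.zeros((64, 64))
for k in range(64):
    img[k, max(0, k - 2):k + 3] = 1.0
tasks = cellularize(img, cell=16)
print(allocate(tasks, robot_positions=[(0, 0), (32, 32), (63, 63)]))
```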
An Auto-Focusing System for Endoscopic Laser Surgery based on a Hydraulic MEMS Varifocal Mirror
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981646 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 660-665
A. Geraldes, P. Fiorini, L. Mattos
Endoscopic laser surgery is a minimally invasive procedure in which a fiber laser tool is used to perform precise incisions in soft tissue. Although the precision of such incisions depends on the proper focusing of the laser, endoscopic laser tools use no optics at all, due to the limited space in the endoscopic system. Instead, they rely on placing the tip of the fiber in direct contact with the tissue, which often leads to tissue carbonization. To solve this problem, we developed a compact auto-focusing system based on a MEMS varifocal mirror. The proposed system is able to ensure the focusing of the laser by controlling the deflection of the varifocal mirror using hydraulic actuation. Validation experiments showed that the system is able to keep the variation of the laser spot diameter under 3% for a distance range between 12.15 and 52.15 mm.
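As a heavily simplified, purely paraxial illustration of the focusing problem (an assumption-laden sketch, not the paper's mirror model or calibration), the mirror equation 1/f = 1/s_o + 1/s_i with f = R/2 gives the curvature radius needed to focus light from the fiber tip onto tissue at the measured working distance; in practice a closed-loop correction on the measured spot diameter could be layered on top. The 10 mm fiber-to-mirror distance below is an arbitrary assumption.

```python
def required_mirror_curvature(source_distance, working_distance):
    """Curvature radius R of an ideal spherical mirror that focuses light
    from the fiber tip (object at source_distance) onto tissue at
    working_distance, via the mirror equation 1/f = 1/s_o + 1/s_i, f = R/2."""
    f = 1.0 / (1.0 / source_distance + 1.0 / working_distance)
    return 2.0 * f

# Toy usage over the working-distance range mentioned in the abstract,
# assuming the fiber tip sits 10 mm from the mirror (illustrative value).
for d_mm in (12.15, 30.0, 52.15):
    R = required_mirror_curvature(10e-3, d_mm * 1e-3)
    print(f"distance {d_mm} mm -> mirror radius {R * 1e3:.2f} mm")
```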
Resilient Autonomous Exploration and Mapping of Underground Mines using Aerial Robots
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981545 | 2019 19th International Conference on Advanced Robotics (ICAR), pp. 1-8
K. Alexis
This paper presents a comprehensive solution to enable resilient autonomous exploration and mapping of underground mines using aerial robots. The described methods and systems address critical challenges related to autonomy, perception and localization under sensor degradation, exploratory path planning in geometrically complex, large and multi-branching environments, and reliable robot operation in communications-denied settings. To facilitate resilient autonomy in such conditions, a set of novel contributions in multi-modal sensor fusion, graph-based path planning, and robot design has been proposed and integrated in micro aerial vehicles, which are not subject to the challenging terrain found in such subterranean settings. The capabilities and performance of the proposed solution are field-verified through a set of real-life autonomous deployments in underground metal mines.
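A small sketch of the selection step common to graph-based exploration planners (generic, with an assumed exponential cost discount rather than the exact gain formulation of this work): candidate viewpoints carry a volumetric gain, and the planner picks the vertex whose gain, discounted by its shortest-path cost, is highest.

```python
import heapq
import math

def shortest_paths(graph, start):
    """Dijkstra over an undirected graph {node: {neighbor: edge_cost}}."""
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u].items():
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(heap, (d + c, v))
    return dist

def best_exploration_target(graph, gains, start, decay=0.25):
    """Pick the vertex maximizing exploration gain discounted by path cost."""
    dist = shortest_paths(graph, start)
    scores = {v: gains.get(v, 0.0) * math.exp(-decay * d)
              for v, d in dist.items() if v != start}
    return max(scores, key=scores.get), scores

# Toy usage: a small viewpoint graph with volumetric gains (unknown voxels seen).
graph = {
    "home": {"a": 2.0, "b": 5.0},
    "a": {"home": 2.0, "c": 3.0},
    "b": {"home": 5.0},
    "c": {"a": 3.0},
}
gains = {"a": 40.0, "b": 200.0, "c": 120.0}
print(best_exploration_target(graph, gains, "home"))
```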