Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981575
Cicero L. Costa, Túlia A. A. Macedo, C. Barcelos
Pelvic floor dysfunction mainly affects adult women; an estimated 15% of multiparous women suffer from the problem. Dysfunctions can be diagnosed by defecography, a dynamic MRI examination whose images specialists use to diagnose dysfunctions of organs such as the bladder and rectum. This paper presents an automated classification system that uses non-rigid registration based on a variational model to create automatic markings from initial markings made by an expert. Classification relies on the simple average and on the centroids produced by K-means clustering, and is evaluated with confusion-matrix-based metrics. Results obtained on 21 defecography exams from 21 different patients indicate that the proposed technique is a promising tool for the diagnosis of pelvic floor disorders and can assist the physician in the diagnostic process.
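The centroid-based decision rule described above can be sketched with plain K-means: compute cluster centroids from marked measurements, then assign a new exam to the nearest centroid. This is an illustrative sketch with hypothetical data, not the authors' implementation:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: returns k cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def classify(sample, centroids):
    """Nearest-centroid rule: label = index of the closest centroid."""
    return int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))
```

In this scheme the expert-derived measurements define the feature space; the centroids then act as class prototypes for new exams.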
Title: "Pre-diagnosis of pelvic floor disorders-based image registration and clustering," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 572-577.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981632
Amy Deeb, M. Seto, Yajun Pan
Navigation in dynamic environments is a challenge for autonomous vehicles operating without prior maps or global position references. This poses high risk to vehicles that perform scientific studies and monitoring missions in marine Arctic environments characterized by slowly moving sea ice with few truly static landmarks. Whereas mature simultaneous localization and mapping (SLAM) approaches assume a static environment, this work extends pose graph SLAM to spatiotemporally evolving environments. A novel model-based dynamic factor is proposed to capture a landmark's state transition model, whether the state is kinematic, appearance-based, or otherwise. The structure of the state transition model is assumed to be known a priori, while its parameters are estimated on-line. Expectation maximization is used to avoid adding variables to the graph. Proof-of-concept results are shown in small- and medium-scale simulations and in small-scale laboratory environments with a small quadrotor. Preliminary laboratory validation results show the effect of the quadrotor platform's mechanical limitations and of the increased uncertainties associated with the model-based dynamic factors on the SLAM estimate. Simulation results are encouraging for the application of model-based dynamic factors to dynamic landmarks with a constant-velocity kinematic model.
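The constant-velocity kinematic model mentioned for dynamic landmarks can be illustrated with a simple Kalman-style predict/update on a 1-D landmark state [position, velocity]; the function names and noise values here are assumptions for illustration, not the paper's factor-graph formulation:

```python
import numpy as np

def cv_predict(x, P, dt, q=1e-3):
    """Constant-velocity transition for x = [pos, vel]; P is the 2x2 covariance."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    # white-noise-acceleration process noise (standard CV model)
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

def cv_update(x, P, z, r=0.01):
    """Update with a position measurement z of the moving landmark."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    y = z - H @ x
    return x + (K * y).ravel(), (np.eye(2) - K @ H) @ P
```

Tracking a landmark drifting at constant speed (e.g. an ice floe) with this model recovers the velocity parameter on-line, which mirrors the on-line parameter estimation the abstract describes.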
Title: "Model-based Dynamic Pose Graph SLAM in Unstructured Dynamic Environments," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 123-128.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981627
Héctor Azpúrua, Filipe A. S. Rocha, G. Garcia, Alexandre Souza Santos, Eduardo Cota, Luiz Guilherme Barros, Alexandre S Thiago, G. Pessin, G. Freitas
Robotic devices operating in confined environments such as underground tunnel systems and cave networks are currently receiving particular attention from research and industry. One example is the Brazilian mining company Vale S.A., which employs a robot, EspeleoRobô, to access restricted areas. The robot was initially designed to inspect natural caves during teleoperated missions and is now being used to monitor dam galleries and other confined or dangerous environments. This paper describes recent developments in locomotion mechanisms and mobility analyses, localization, and path planning strategies aimed at autonomous robot operation. Preliminary results from simulation and field experiments validate the robotic device concept. Lessons learned from multiple field tests in industrial scenarios are also described.
Title: "EspeleoRobô - a robotic device to inspect confined environments," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 17-23.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981653
Edison Kleiber Titito Concha, Diego Pittol, Ricardo Westhauser, M. Kolberg, R. Maffei, Edson Prestes e Silva
Keyframe-based monocular SLAM (Simultaneous Localization and Mapping) is one of the main visual SLAM approaches, estimating camera motion together with map reconstruction over selected frames. These techniques represent the environment by map points located in three-dimensional space that can be recognized and located in each frame. However, they usually cannot decide when a map point is an outlier or obsolete information that can be discarded. Another problem is deciding when to combine map points that correspond to the same three-dimensional point. In this paper, we present a robust method to maintain a refined map. The approach uses the covisibility graph and an information-fusion algorithm to build a probabilistic map that explicitly models outlier measurements. In addition, we incorporate a pruning mechanism to reduce redundant information and remove outliers. In this way, our approach reduces the map size while maintaining essential information about the environment. Finally, to evaluate the performance of our method, we incorporate it into an ORB-SLAM system and measure the accuracy achieved on publicly available benchmark datasets containing indoor image sequences recorded with a hand-held monocular camera.
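One simple way to realize the outlier-pruning idea (not necessarily the paper's probabilistic model) is to track, per map point, how often it is actually matched when it is predicted to be visible, and prune points whose match ratio stays low; the thresholds below are hypothetical:

```python
class MapPoint:
    """Tracks how often a 3-D point was matched when it should have been visible."""

    def __init__(self):
        self.found = 0      # frames where the point was actually matched
        self.visible = 0    # frames where it projected into the camera view

    def observe(self, matched):
        self.visible += 1
        if matched:
            self.found += 1

    def is_outlier(self, min_ratio=0.25, min_obs=4):
        # too few successful matches relative to predicted visibility -> prune
        return self.visible >= min_obs and self.found / self.visible < min_ratio
```

A periodic sweep over all map points calling `is_outlier` would implement the pruning pass that keeps the map compact.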
Title: "Map Point Optimization in Keyframe-Based SLAM using Covisibility Graph and Information Fusion," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 129-134.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981628
N. Dhanaraj, Julietta Maffeo, G. Pereira, Jason N. Gross, Yu Gu, Nathan Hewitt, Casey Edmonds-Estes, Rachel Jarman, Jeongwoo Seo, Henry Gunner, Alexandra Hatfield, Tucker Johnson, Lunet Yifru
This paper describes the Adaptable Platform for Interactive Swarm robotics (APIS), a testbed designed to accelerate development in human-swarm interaction (HSI) research. Specifically, it presents the design of a swarm robot platform composed of fifty low-cost robots, coupled with a testing field and a software architecture that allows modular and versatile development of swarm algorithms. The motivation behind the platform is that the emergence of a swarm's collective behavior can be difficult to predict and control; however, human-swarm interaction can measurably increase a swarm's performance, as the human operator may have intuition or knowledge unavailable to the swarm. APIS allows researchers to focus on HSI research without being constrained to a fixed ruleset or interface. A short survey offers a taxonomy of swarm platforms and provides conclusions that contextualize the development of APIS. Next, the motivations, design, and functionality of the APIS testbed are described. Finally, the operation and potential of the platform are demonstrated through two experimental evaluations.
Title: "Adaptable Platform for Interactive Swarm Robotics (APIS): A Human-Swarm Interaction Research Testbed," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 720-726.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981651
E. G. Ribeiro, V. Grassi
The development of the robotics field has not yet allowed robots to execute, with dexterity, simple actions performed by humans, one of which is the grasping of objects by robotic manipulators. Exploring the use of deep learning algorithms, specifically Convolutional Neural Networks (CNNs), for the robotic grasping problem, this work addresses the visual perception phase involved in the task. To this end, the “Cornell Grasp” dataset was used to train a CNN capable of predicting the most suitable place to grasp an object. It does so by obtaining a grasping rectangle that represents the position, orientation, and opening of the robot's parallel grippers just before they are closed. The proposed system works in real time due to the small number of network parameters, which is made feasible by the data augmentation strategy used. Detection accuracy is in accordance with the state of the art, and the prediction speed is, to the best of our knowledge, the highest in the literature.
Title: "Fast Convolutional Neural Network for Real-Time Robotic Grasp Detection," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 49-54.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981550
D. Macharet, A. A. Neto
The employment of autonomous agents for persistent monitoring tasks has increased significantly in recent years. The data collection process must take into account limited resources, such as time and energy, whilst acquiring enough data to generate accurate models of the underlying phenomena. Many schedulers in the literature act in an off-line manner, meaning they define the sequence of visits and generate a set of paths before any observations are made. On-line approaches, however, can adapt their behavior based on previously collected data, allowing more precise models to be obtained. In this paper, we propose an on-line scheduler that evaluates the sampling rate of the signals being measured to assign different priorities to different Points-of-Interest (PoIs). According to this priority, it then determines whether a region must be visited more or less frequently to increase our knowledge of the phenomenon. Our methodology was evaluated through several experiments in a simulated environment.
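The priority-assignment idea can be sketched as follows: estimate each PoI's signal variability from past samples and visit the fastest-changing PoIs first. This is a hedged illustration; `revisit_priority` and its mean-rate heuristic are assumptions for the sketch, not the paper's scheduler:

```python
import numpy as np

def revisit_priority(samples, times):
    """Higher priority for PoIs whose signal changes faster between visits.

    Estimates the mean absolute rate of change from past samples; a PoI whose
    phenomenon varies quickly should be revisited more often to avoid aliasing.
    """
    rates = np.abs(np.diff(samples)) / np.diff(times)
    return float(np.mean(rates))

def schedule(priorities):
    """Order PoI ids by descending priority (fastest-changing first)."""
    return sorted(priorities, key=lambda pid: priorities[pid], reverse=True)
```

An on-line loop would recompute these priorities after every visit, so the ordering adapts as new observations arrive.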
Title: "Multi-robot On-line Sampling Scheduler for Persistent Monitoring," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 617-622.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981636
J. C. Vendrichoski, T. L. Costa, E. S. Elyoussef, E. D. Pieri
With a wide spectrum of applications ranging from entertainment to military use, quadrotors in their conventional construction, with four fixed rotors, have proven sufficiently capable of robustly performing numerous tasks. However, the ability to generate thrust in only one direction, i.e., normal to the main plane, is very restrictive and drastically reduces the maneuverability and agility of the vehicle. In this paper, an alternative quadrotor model is presented. Constructively, the difference lies in the addition of a mechanism that tilts the rotors in the longitudinal direction, which in practice adds maneuverability by enabling longitudinal translation decoupled from the pitching movement. Besides this new thrust component in the longitudinal direction, the configuration also yields a significant increase in the yaw torque. These are highly desirable features in a UAV used to execute tasks that require physical interaction with the surrounding environment. The mathematical model of the entire system is obtained by employing the Euler-Lagrange formalism and a multi-body approach. In addition, a basic control scheme is used to verify the obtained model through simulation.
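The extra longitudinal thrust component can be seen from simple geometry: tilting the rotors by an angle alpha about the pitch axis splits the total thrust T into T*sin(alpha) forward and T*cos(alpha) upward, so the vehicle can translate without pitching. A minimal sketch of this decomposition (illustrative only, not the paper's Euler-Lagrange model):

```python
import math

def body_thrust(T, alpha):
    """Total thrust T with rotors tilted by alpha about the pitch axis:
    a longitudinal component appears without pitching the vehicle."""
    fx = T * math.sin(alpha)   # longitudinal (forward) thrust
    fz = T * math.cos(alpha)   # vertical thrust
    return fx, fz

def hover_tilt(T, mass, g=9.81):
    """Tilt angle that keeps altitude (fz = m*g) while translating forward."""
    return math.acos(min(1.0, mass * g / T))
```

With fixed rotors alpha is always zero, which is exactly the one-directional-thrust restriction the abstract describes.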
Title: "Mathematical modeling and control of a quadrotor aerial vehicle with tiltrotors aimed for interaction tasks," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 161-166.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981567
Valentim Ernandes Neto, M. S. Filho, A. Brandão
This work discusses the implementation and experimentation of a control system designed to guide a multi-robot formation of one aerial and three ground robots in a trajectory tracking task. The implementation considers three partial formations of two robots, each associated with a virtual structure: a straight line in 3D space with its own formation controller. Two of the partial line formations are homogeneous, formed by two ground vehicles, and the third is heterogeneous, involving a ground and an aerial vehicle. All partial formations share the same position reference, one of the ground vehicles. This makes it possible to discuss what happens if this reference vehicle, and thus the whole formation, experiences a failure, which is considered here in connection with an escorting mission. The result is that the designed control system effectively accomplishes the mission even upon a failure of the reference robot, as validated by the experimental results also discussed here.
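The null-space composition underlying this kind of formation controller can be sketched generically: the secondary task velocity is projected into the null space of the primary task Jacobian, so it can never disturb the primary task. This is the textbook null-space-based formulation, not necessarily the authors' exact controller:

```python
import numpy as np

def nsb_velocity(J1, dsigma1, J2, dsigma2):
    """Null-space-based task composition for task Jacobians J1 (primary)
    and J2 (secondary) with desired task velocities dsigma1, dsigma2."""
    J1p = np.linalg.pinv(J1)
    J2p = np.linalg.pinv(J2)
    N1 = np.eye(J1.shape[1]) - J1p @ J1   # projector onto null(J1)
    # secondary contribution filtered through N1 cannot affect task 1
    return J1p @ dsigma1 + N1 @ (J2p @ dsigma2)
```

Because of the projector `N1`, the primary task (e.g. keeping the line formation) is satisfied exactly, while the secondary task is achieved only to the extent the remaining degrees of freedom allow.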
Title: "Null Space-Based Formation Control with Leader Change Possibility," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 266-271.
Pub Date: 2019-12-01 | DOI: 10.1109/ICAR46387.2019.8981620
P. Cornelissen, M. Ourak, G. Borghesan, D. Reynaerts, E. V. Poorten
Depth perception remains a key challenge in vitreoretinal surgery. Currently, users only have the view from an overhead stereo-microscope available, and this means of visualisation is quite restrictive. Optical Coherence Tomography (OCT) has been introduced in the past and is even integrated in a number of commercial systems to provide more detailed depth vision, even showing subsurface structures up to a few millimeters below the surface. The intra-operative use of OCT, especially in combination with robotics, is still sub-optimal: at present one can obtain either a very slowly updating large volume scan (C-scan), a faster but unaligned cross-sectional B-scan, or an even more local single-point A-scan at a very high update rate. In this work we propose a model-mediated approach. As the posterior eye segment can be approximated as a sphere, we propose to model the retina with this simplified sphere model, whose center and radius can be estimated in real time. A time-varying Kalman filter is proposed in combination with an instrument-integrated optical fiber that provides high-frequency A-scans along the longitudinal instrument direction. The model and its convergence were validated first in a simulation environment and subsequently using an OCT A-scan probe mounted on a co-manipulated vitreoretinal robotic system, manipulated to measure a stereolithographically 3D-printed spherical model of the posterior eye segment. The feasibility of the proposed method was demonstrated in various scenarios. For this validation, an error of 20 micrometers and a convergence time of 2.0 seconds were found when sampling A-scans at 200 Hz.
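The sphere-estimation step can be illustrated with a batch linear least-squares fit to 3-D surface points, a simpler stand-in for the paper's time-varying Kalman filter. It uses the identity |p|^2 = 2 p.c + (r^2 - |c|^2), which is linear in the center c and in k = r^2 - |c|^2:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to an (n, 3) array of surface points.

    Solves A x = b with A = [2x, 2y, 2z, 1] and b = |p|^2, where
    x = [cx, cy, cz, k] and k = r^2 - |c|^2.
    """
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, k = sol[:3], sol[3]
    r = np.sqrt(k + c @ c)
    return c, r
```

A recursive (Kalman) formulation of the same linear model would update c and r from each new A-scan point instead of refitting in batch, which is closer to the real-time behavior described in the abstract.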
Title: "Towards Real-time Estimation of a Spherical Eye Model based on a Single Fiber OCT," in 2019 19th International Conference on Advanced Robotics (ICAR), pp. 666-672.