MMAC Height Control System of a Quadrotor for Constant Unknown Load Transportation
Pedro Outeiro, C. Cardeira, P. Oliveira
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594215 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4192-4197
This paper presents a methodology for height control of a quadrotor that transports a constant unknown load, given estimates of both the weight and the state variables based on measurements from motion sensors installed on board. The proposed control and estimation framework is a Multi-Model Adaptive Controller using an LQR with integral action and a Kalman filter with an integral component. The resulting control system is validated both in simulation and experimentally, resorting to an off-the-shelf, commercially available quadrotor equipped with an IMU, an ultrasound height sensor, and a barometer, among other sensors.
Virtual Occupancy Grid Map for Submap-based Pose Graph SLAM and Planning in 3D Environments
Bing-Jui Ho, Paloma Sodhi, P. Teixeira, Ming Hsiao, Tushar Kusnur, M. Kaess
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594234 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2175-2182
In this paper, we propose a mapping approach that constructs a globally deformable virtual occupancy grid map (VOG-map) based on local submaps. Such a representation allows pose graph SLAM systems to correct globally accumulated drift via loop closures while maintaining free space information for the purpose of path planning. We demonstrate the use of such a representation by implementing an underwater SLAM system in which the robot actively plans paths to generate accurate 3D scene reconstructions. We evaluate performance in both simulated and real-world experiments. Our work furthers the capabilities of mobile robots in actively mapping and exploring unstructured, three-dimensional environments.
A Novel Autonomous Robot for Greenhouse Applications
Lars Grimstad, Remy Zakaria, Tuan-Dung Le, P. From
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594233 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1-9
This paper presents a novel agricultural robot for greenhouse applications. In many greenhouses, including the greenhouse used in this work, sets of pipes run along the floor between plant rows. These pipes are components of the greenhouse heating system and double as rails for trolleys used by workers. A flat surface separates the start of each rail set at the greenhouse headland. If a robot is to drive autonomously along plant rows and also move from one set of rails to the next, it must be able to locomote both on rails and on flat surfaces. This places requirements on mechanical design and navigation, as the robot must cope with two very different operational environments. The robot presented in this paper has been designed to overcome these challenges and allows for autonomous operation both in open environments and on rails using only low-cost sensors. The robot is assembled from a modular system created by the authors and tested in a greenhouse during ordinary operation. Using the robot, we map the environment and automatically determine the starting point of each rail in the map. We also show how we identify rails and estimate the robot's pose relative to these using only a low-cost 3D camera. When a rail is located, the robot makes the transition from floor to rail and travels along the row of plants before moving to the next rail set it has identified in the map. The robot is used for UV treatment of cucumber plants.
A Confidence-Based Shared Control Strategy for the Smart Tissue Autonomous Robot (STAR)
H. Saeidi, J. Opfermann, M. Kam, S. Raghunathan, S. Léonard, A. Krieger
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594290 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1268-1275
Autonomous robotic-assisted surgery (RAS) systems aim to reduce human errors and improve patient outcomes by leveraging robotic accuracy and repeatability during surgical procedures. However, full automation of RAS in complex surgical environments is still not feasible, and collaboration with the surgeon is required for safe and effective use. In this work, we utilize our Smart Tissue Autonomous Robot (STAR) to develop and evaluate a shared control strategy for the collaboration of the robot with a human operator in surgical scenarios. We consider 2D pattern cutting tasks with partial blood occlusion of the cutting pattern using a robotic electrocautery tool. For this surgical task and RAS system, we i) develop a confidence-based shared control strategy, ii) assess the pattern tracking performance of manual and autonomous control and identify confidence models for the human and the robot as well as a confidence-based control allocation function, and iii) experimentally evaluate the accuracy of our proposed shared control strategy. In our experiments on porcine fat samples, by combining the best elements of the autonomous robot controller with the complementary skills of a human operator, our proposed control strategy improved cutting accuracy by 6.4% while reducing operator work time to 44% of that of pure manual control.
Sinc-Based Dynamic Movement Primitives for Encoding Point-to-point Kinematic Behaviors
Dimitrios Papageorgiou, Antonis Sidiropoulos, Z. Doulgeri
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594479 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8339-8345
This work proposes the use of sinc functions as kernels of Dynamic Movement Primitive (DMP) models for encoding point-to-point kinematic behaviors. The proposed method presents a number of advantages with respect to the state of the art, as it (i) involves a simple learning technique, (ii) provides a way to determine the minimum required number of basis functions based on the frequency content of the demonstrated motion, and (iii) allows the reproduction accuracy of the learned behavior to be pre-defined. The ability of the proposed model to accurately reproduce the behavior is demonstrated through simulations and experiments. Comparisons with the Gaussian-based DMP model show the proposed method's superiority in terms of the computational complexity of learning and accuracy for a given number of kernels.
Algorithms for Task Allocation in Homogeneous Swarm of Robots
Devesh K. Jha
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594052 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3771-3776
In this paper, we present algorithms for synthesizing controllers to distribute a swarm of homogeneous robots (agents) over heterogeneous tasks that are operated in parallel. The swarm is modeled as a homogeneous collection of irreducible Markov chains, whose states represent the tasks performed by the swarm. The target state is a pre-defined distribution of agents over the states of the Markov chain (and thus over the tasks). We make use of the ergodicity property of irreducible Markov chains to ensure that, as each individual agent converges to the desired behavior in time, the swarm converges to the target state. To circumvent the problems faced by a purely global controller or purely local/decentralized controllers, we design a controller that combines global supervision with local-feedback-based state-level decisions. Numerical experiments illustrate the performance of the proposed algorithms.
SOS: Stereo Matching in O(1) with Slanted Support Windows
V. Tankovich, Michael Schoenberg, S. Fanello, Adarsh Kowdle, Christoph Rhemann, Maksym Dzitsiuk, Mirko Schmidt, Julien P. C. Valentin, S. Izadi
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593800 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6782-6789
Depth cameras have accelerated research in many areas of computer vision. Most triangulation-based depth cameras, whether structured light systems like the Kinect or active (assisted) stereo systems, are based on the principle of stereo matching. Depth from stereo is an active research topic dating back 30 years. Despite recent advances, algorithms usually trade off accuracy for speed. In particular, efficient methods rely on fronto-parallel assumptions to reduce the search space and keep computation low. We present SOS (Slanted O(1) Stereo), the first algorithm capable of leveraging slanted support windows without sacrificing speed or accuracy. We use an active stereo configuration in which an illuminator textures the scene. Under this setting, local methods such as PatchMatch Stereo obtain state-of-the-art results by jointly estimating disparities and slant, but at a large computational cost. We observe that these methods typically exploit local smoothness to simplify their initialization strategies. Our key insight is that local smoothness can in fact be used to amortize the computation not only within initialization, but across the entire stereo pipeline. Building on these insights, we propose a novel hierarchical initialization that is able to efficiently search over disparities and slants. We then show how this structure can be leveraged to provide high-quality depth maps. Extensive quantitative evaluations demonstrate that the proposed technique yields significantly more precise results than the current state of the art, but at a fraction of the computational cost. Our prototype implementation runs at 4000 fps on modern GPU architectures.
Elastic Structure Preserving Impedance (ESπ) Control for Compliantly Actuated Robots
Manuel Keppler, Dominic Lakatos, C. Ott, A. Albu-Schäffer
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593415 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5861-5868
We present a new approach for Cartesian impedance control of compliantly actuated robots with possibly nonlinear spring characteristics. It reveals a remarkable stiffness and damping range in the experimental evaluation. The most interesting contribution is the way the desired closed-loop dynamics is designed. Our control concept allows a desired stiffness and damping to be added directly at the end-effector while leaving the system structure intact; the intrinsic inertial and elastic properties of the system are preserved. This is achieved by introducing new motor coordinates that reflect the desired spring and damper terms. Theoretically, by means of additional motor inertia shaping, it is possible to make the end-effector interaction behavior with respect to external loads approach, arbitrarily closely, the interaction behavior achievable by classical Cartesian impedance control on rigid robots. The physically motivated design approach allows for an intuitive understanding of the resulting closed-loop dynamics. We perform a passivity and stability analysis on the basis of a physically motivated storage and Lyapunov function.
Towards a Soft Fingertip with Integrated Sensing and Actuation
Benjamin W. McInroe, Carolyn L. Chen, Ken Goldberg, R. Bajcsy, R. Fearing
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594032 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6437-6444
Soft material robots are attractive for safe interaction with humans and unstructured environments due to their compliance and low intrinsic stiffness and mass. These properties enable new capabilities, such as the ability to conform to environmental geometry for tactile sensing and to undergo large shape changes for actuation. Due to the complex coupling between sensing and actuation in high-dimensional nonlinear soft systems, prior work in soft robotics has primarily focused on either sensing or actuation. This paper presents SOFTcell, a novel controllable-stiffness tactile device that incorporates both optical sensing and pneumatic actuation. We report details of the device's design and implementation and analyze results from characterization experiments on sensitivity and performance, which show that SOFTcell can controllably increase its effective modulus from 4.4 kPa to 46.1 kPa. Additionally, we demonstrate the utility of SOFTcell for grasping in a reactive control task in which tactile data are used to detect fingertip shear as a grasped object slips, and cell pressurization is used to prevent the slip without adjusting the fingertip position.
A Multi-Rate State Observer for Visual Tracking of Magnetic Micro-Agents Using 2D Slow Medical Imaging Modalities
Mert Kaya, A. Denasi, S. Scheggi, Erdem Agbahca, ChangKyu Yoon, D. Gracias, S. Misra
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594349 | 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1-8
Minimally invasive surgery can benefit greatly from utilizing micro-agents. These miniaturized agents need to be clearly visualized and precisely controlled to ensure the success of the surgery. Since medical imaging modalities suffer from low acquisition rates, multi-rate sampling methods can be used to estimate the intersample states of micro-agents. Hence, the sampling rate of the controller can be virtually increased even if the position data are acquired using a slow medical imaging modality. This study presents multi-rate Luenberger and Kalman state estimators for visual tracking of micro-agents. The micro-agents are tracked using sum-of-squared-differences and normalized-cross-correlation-based visual tracking, and the outputs of the two methods are merged to minimize the tracking error and prevent tracking failures. During the experiments, micro-agents with different geometrical shapes and sizes are imaged using a 2D ultrasound machine and a microscope, and manipulated using electromagnetic coils. The multi-rate state estimation accuracy is measured using a high-speed camera. The precision of the tracking and the multi-rate state estimation is verified experimentally under challenging conditions. For this purpose, an elliptically shaped magnetic micro-agent with a length of 48 pixels is used. The maximum absolute errors in the $x$ and $y$ axes are 2.273 and 2.432 pixels, respectively, for an 8-fold increase of the sample rate (25 frames per second). During the experiments, it was observed that the micro-agents could be tracked more reliably using normalized-cross-correlation-based visual tracking, and intersample states could be estimated more accurately using the Kalman state estimator. Experimental results show that the proposed method can be used to track micro-agents in medical imaging modalities and estimate system states at intermediate time instants in real time.