Localization from visual landmarks on a free-flying robot
B. Coltin, Jesse Fusco, Z. Moratto, O. Alexandrov, Robert Nakamura
Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759644
We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots currently operating on the ISS, whose reliance on fixed ultrasonic beacons for localization confines them to a two-meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features matched against a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot's pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.
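As an illustration of the sensor fusion described above, here is a minimal EKF propagate-and-update sketch. It assumes a simplified linear position-velocity model with IMU acceleration as the control input and a map-matched landmark as a position measurement; this is illustrative only, not Astrobee's flight filter, which estimates full 6-DOF pose, attitude, and sensor biases. The noise values are placeholders.

```python
import numpy as np

dt = 0.1
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])            # constant-velocity transition
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])  # IMU acceleration input
H = np.hstack([np.eye(3), np.zeros((3, 3))])              # vision observes position only
Q = 1e-3 * np.eye(6)                                      # process noise (assumed)
R = 1e-2 * np.eye(3)                                      # vision noise (assumed)

def ekf_step(x, P, accel, z_landmark):
    # Propagate the state with the IMU reading.
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    # Update with a position fix from a landmark matched to the map.
    y = z_landmark - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```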
{"title":"Localization from visual landmarks on a free-flying robot","authors":"B. Coltin, Jesse Fusco, Z. Moratto, O. Alexandrov, Robert Nakamura","doi":"10.1109/IROS.2016.7759644","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759644","url":null,"abstract":"We present the localization approach for Astrobee, a new free-flying robot designed to navigate autonomously on the International Space Station (ISS). Astrobee will accommodate a variety of payloads and enable guest scientists to run experiments in zero-g, as well as assist astronauts and ground controllers. Astrobee will replace the SPHERES robots which currently operate on the ISS, whose use of fixed ultrasonic beacons for localization limits them to work in a 2 meter cube. Astrobee localizes with monocular vision and an IMU, without any environmental modifications. Visual features detected on a pre-built map, optical flow information, and IMU readings are all integrated into an extended Kalman filter (EKF) to estimate the robot pose. We introduce several modifications to the filter to make it more robust to noise, and extensively evaluate the localization algorithm.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130727386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field
Keerthy Kusumam, T. Krajník, S. Pearson, Grzegorz Cielniak, T. Duckett
Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759121
This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning classifiers, and temporal filtering methods for the detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, a Support Vector Machine classifier, and a temporal filter that tracks the detected heads yields a system that detects broccoli heads with 95.2% precision. We also show that the temporal filtering can be used to generate a 3D map of broccoli head positions in the field.
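A sketch of such a pipeline, under stated assumptions: VFH descriptors (308-dimensional, e.g. computed with PCL) are assumed precomputed for candidate point-cloud clusters, an SVM separates heads from background, and an exponential filter smooths per-candidate scores across frames. All names and parameters here are illustrative, not the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVC

clf = SVC(kernel="rbf", probability=True)

def train(vfh_descriptors, labels):
    # vfh_descriptors: (n_samples, 308) VFH features; labels: 1 = head, 0 = background.
    clf.fit(vfh_descriptors, labels)

def temporal_filter(prev_score, new_score, alpha=0.7):
    # Exponentially weighted score for a candidate tracked across frames.
    return alpha * prev_score + (1 - alpha) * new_score

def detect(frame_descriptors, tracked_scores, threshold=0.8):
    scores = clf.predict_proba(frame_descriptors)[:, 1]
    fused = [temporal_filter(p, s) for p, s in zip(tracked_scores, scores)]
    detections = [i for i, s in enumerate(fused) if s > threshold]
    return detections, fused
```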
{"title":"Can you pick a broccoli? 3D-vision based detection and localisation of broccoli heads in the field","authors":"Keerthy Kusumam, T. Krajník, S. Pearson, Grzegorz Cielniak, T. Duckett","doi":"10.1109/IROS.2016.7759121","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759121","url":null,"abstract":"This paper presents a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with 95.2% precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114521058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ALPHA: A hybrid self-adaptable hand for a social humanoid robot
Giulio Cerruti, D. Chablat, D. Gouaillier, S. Sakka
Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759157
This paper presents a novel design of a compact, lightweight robotic hand for a social humanoid robot. The proposed system can perform common hand gestures and self-adaptable grasps by combining under-actuated and self-adaptable hand kinematics in a single design. The hand answers the need for precise finger postures and sensor-less force feedback during gestures, and for finger adaptation and autonomous force distribution during grasps. These are provided by a dual actuation system embodied within the palm and the fingers. Their coexistence is ensured by compliant transmissions based on elastomer bars rather than classical tension springs, chosen for their high elastic coefficient at reduced sizes and strains. The proposed solution significantly reduces the weight and size of the hand by using a small number of compact actuators for gesturing and a single motor for grasping. The hand prototype (ALPHA) was built to confirm the design's feasibility and functional capabilities. It is controlled to provide safe human-robot interaction and to preserve mechanical integrity when embodied on a humanoid robot.
{"title":"ALPHA: A hybrid self-adaptable hand for a social humanoid robot","authors":"Giulio Cerruti, D. Chablat, D. Gouaillier, S. Sakka","doi":"10.1109/IROS.2016.7759157","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759157","url":null,"abstract":"This paper presents a novel design of a compact and light-weight robotic hand for a social humanoid robot. The proposed system is able to perform common hand gestures and self-adaptable grasps by mixing under-actuated and self-adaptable hand kinematics in a unique design. The hand answers the need for precise finger postures and sensor-less force feedback during gestures and for finger adaptation and autonomous force distribution during grasps. These are provided by a dual actuation system embodied within the palm and the fingers. Coexistence is ensured by compliant transmissions based on elastomer bars rather than classical tension springs, thanks to their high elastic coefficient at reduced sizes and strains. The proposed solution significantly reduces the weight and the size of the hand by using a reduced number of small actuators for gesturing and a single motor for grasping. The hand prototype (ALPHA) is realized to confirm the design feasibility and functional capabilities. It is controlled to provide safe human-robot interaction and preserve mechanical integrity in order to be embodied on a humanoid robot.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123598952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Controlling a multi-joint arm actuated by pneumatic muscles with quasi-DDP optimal control
Ganesh Kumar Hari Shankar Lal Das, B. Tondu, Florent Forget, Jérôme Manhes, O. Stasse, P. Souéres
Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759103
Pneumatic actuators have inherent compliance, which makes them attractive for applications involving interaction with the environment or with humans. Controlling such actuators, however, is not trivial. This paper presents an iterative Linear Quadratic Regulator (iLQR) based optimal control framework for an anthropomorphic arm in which each joint is actuated by an agonist-antagonist pair of McKibben artificial muscles. The method is applied to positioning tasks and to the generation of explosive movements by maximizing link speed. It is then compared to traditional control strategies, showing that optimal control is effective for position control of highly non-linear pneumatic systems. The importance of varying compliance is highlighted by repeating the tasks at different compliance levels. The algorithm is validated through several simulations and hardware experiments in which shoulder and elbow flexion are controlled simultaneously.
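For reference, a compact iLQR sketch under simplifying assumptions: quadratic tracking cost, finite-difference linearization, and no regularization or line search (which practical implementations add). The McKibben muscle pressure dynamics are not modeled here; they would enter only through the user-supplied step function f(x, u).

```python
import numpy as np

def finite_diff_jacobians(f, x, u, h=1e-5):
    # Linearize discrete dynamics x' = f(x, u) around (x, u).
    n, m = len(x), len(u)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    fx = f(x, u)
    for i in range(n):
        dx = np.zeros(n); dx[i] = h
        A[:, i] = (f(x + dx, u) - fx) / h
    for j in range(m):
        du = np.zeros(m); du[j] = h
        B[:, j] = (f(x, u + du) - fx) / h
    return A, B

def ilqr(f, x0, U, x_goal, Q, R, Qf, iters=20):
    T = len(U)
    for _ in range(iters):
        # Forward rollout of the current control sequence.
        X = [x0]
        for t in range(T):
            X.append(f(X[t], U[t]))
        # Backward pass: quadratic value function along the trajectory.
        Vx, Vxx = Qf @ (X[T] - x_goal), Qf
        ks, Ks = [None] * T, [None] * T
        for t in reversed(range(T)):
            A, B = finite_diff_jacobians(f, X[t], U[t])
            Qx  = Q @ (X[t] - x_goal) + A.T @ Vx
            Qu  = R @ U[t] + B.T @ Vx
            Qxx = Q + A.T @ Vxx @ A
            Quu = R + B.T @ Vxx @ B
            Qux = B.T @ Vxx @ A
            k = -np.linalg.solve(Quu, Qu)
            K = -np.linalg.solve(Quu, Qux)
            ks[t], Ks[t] = k, K
            Vx  = Qx + K.T @ Quu @ k + K.T @ Qu + Qux.T @ k
            Vxx = Qxx + K.T @ Quu @ K + K.T @ Qux + Qux.T @ K
        # Forward pass: apply the locally-linear feedback policy.
        x, U_new = x0, []
        for t in range(T):
            u = U[t] + ks[t] + Ks[t] @ (x - X[t])
            U_new.append(u)
            x = f(x, u)
        U = np.array(U_new)
    return U
```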
{"title":"Controlling a multi-joint arm actuated by pneumatic muscles with quasi-DDP optimal control","authors":"Ganesh Kumar Hari Shankar Lal Das, B. Tondu, Florent Forget, Jérôme Manhes, O. Stasse, P. Souéres","doi":"10.1109/IROS.2016.7759103","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759103","url":null,"abstract":"Pneumatic actuators have inherent compliance and hence they are very interesting for applications involving interaction with environment or human. But controlling such kind of actuators is not trivial. The paper presents an implementation of iterative Linear Quadratic regulator (iLQR) based optimal control framework to control an anthropomorphic arm with each joint actuated by an agonist-antagonistic pair of Mckibben artificial muscles. The method is applied to positioning tasks and generation of explosive movements by maximizing the link speed. It is then compared to traditional control strategies to justify that optimal control is effective in controlling the position in highly non-linear pneumatic systems. Also the importance of varying compliance is highlighted by repeating the tasks at different compliance level. The algorithm validation is reported here by several simulations and hardware experiments in which the shoulder and elbow flexion are controlled simultaneously.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126570945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RL-IAC: An exploration policy for online saliency learning on an autonomous mobile robot
Céline Craye, David Filliat, Jean-François Goudou
Pub Date: 2016-10-09 | DOI: 10.1109/IROS.2016.7759716
In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot in the task of learning saliency. To this end, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) that guides the robot through areas of the space where learning progress is high, while minimizing the time spent moving through the environment without learning. We first demonstrate that our saliency approach is an efficient tool for generating relevant object box proposals in the input image and significantly outperforms the state-of-the-art EdgeBoxes algorithm. Second, we show that RL-IAC can drastically decrease the time required to learn saliency compared to random exploration.
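A toy sketch of the curiosity-driven region selection at the core of IAC-style methods, under assumptions: each region keeps a history of the saliency model's prediction errors, learning progress is the recent decrease in that error, and the robot weighs progress against travel time. The paper's RL formulation over a navigation graph is considerably richer; names below are illustrative.

```python
import numpy as np

class RegionSelector:
    def __init__(self, n_regions, window=10):
        self.errors = [[] for _ in range(n_regions)]
        self.window = window

    def record_error(self, region, error):
        self.errors[region].append(error)

    def learning_progress(self, region):
        e = self.errors[region]
        if len(e) < 2 * self.window:
            return float("inf")          # unexplored regions stay attractive
        old = np.mean(e[-2 * self.window:-self.window])
        new = np.mean(e[-self.window:])
        return max(old - new, 0.0)       # error decrease = learning progress

    def next_region(self, travel_time):
        # travel_time[r]: cost of moving from the current pose to region r.
        scores = [self.learning_progress(r) / (1.0 + travel_time[r])
                  for r in range(len(self.errors))]
        return int(np.argmax(scores))
```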
{"title":"RL-IAC: An exploration policy for online saliency learning on an autonomous mobile robot","authors":"Céline Craye, David Filliat, Jean-François Goudou","doi":"10.1109/IROS.2016.7759716","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759716","url":null,"abstract":"In the context of visual object search and localization, saliency maps provide an efficient way to find object candidates in images. Unlike most approaches, we propose a way to learn saliency maps directly on a robot, by exploring the environment, discovering salient objects using geometric cues, and learning their visual aspects. More importantly, we provide an autonomous exploration strategy able to drive the robot for the task of learning saliency. For that, we describe the Reinforcement Learning-Intelligent Adaptive Curiosity algorithm (RL-IAC), a mechanism based on IAC (Intelligent Adaptive Curiosity) able to guide the robot through areas of the space where learning progress is high, while minimizing the time spent to move in its environment without learning. We demonstrate first that our saliency approach is an efficient tool to generate relevant object boxes proposal in the input image and significantly outperforms the state-of-the-art EdgeBoxes algorithm. Second, we show that RL-IAC can drastically decrease the required time for learning saliency compared to random exploration.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129179685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-robot path planning for budgeted active perception with self-organising maps
Graeme Best, J. Faigl, R. Fitch
Pub Date: 2016-10-04 | DOI: 10.1109/IROS.2016.7759489
We propose a self-organising map (SOM) algorithm as a solution to a new multi-goal path planning problem for active perception and data collection tasks. We optimise paths for a multi-robot team that aims to maximally observe a set of nodes in the environment. The selected nodes are observed by visiting associated viewpoint regions defined by a sensor model. The key problem characteristics are that the viewpoint regions are overlapping polygonal continuous regions, each node has an observation reward, and the robots are constrained by travel budgets. The SOM algorithm jointly selects and allocates nodes to the robots and finds favourable sequences of sensing locations. The algorithm has polynomial-bounded runtime independent of the number of robots. We demonstrate feasibility for the active perception task of observing a set of 3D objects. The viewpoint regions account for sensing ranges and self-occlusions, and the rewards are measured as discriminability in the ensemble-of-shape-functions feature space. Simulations were performed using a 3D point cloud dataset from a real robot in a large outdoor environment. Our results show that the proposed method enables multi-robot planning for budgeted active perception tasks with continuous sets of candidate viewpoints and long planning horizons.
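To make the SOM idea concrete, here is the classic ring-based SOM tour construction on point goals, in the spirit of SOM approaches to multi-goal planning: a ring of neurons adapts toward the goals, and the learned ring ordering yields a visiting sequence. This is a minimal sketch; the paper generalizes to polygonal viewpoint regions, observation rewards, travel budgets, and multiple robots.

```python
import numpy as np

def som_tour(goals, n_neurons=None, epochs=200):
    goals = np.asarray(goals, dtype=float)
    n = n_neurons or 3 * len(goals)
    # Initialize the neuron ring on a small circle around the goal centroid.
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    ring = goals.mean(0) + 0.1 * np.c_[np.cos(theta), np.sin(theta)]
    sigma, lr = n / 8.0, 0.8
    for _ in range(epochs):
        for g in np.random.permutation(goals):
            winner = np.argmin(np.linalg.norm(ring - g, axis=1))
            # Distance along the ring (wrap-around) from the winning neuron.
            idx = np.arange(n)
            d = np.minimum(np.abs(idx - winner), n - np.abs(idx - winner))
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            ring += lr * h[:, None] * (g - ring)   # pull neighborhood toward goal
        sigma *= 0.99
        lr *= 0.99                                  # decay neighborhood and rate
    # Read the tour off the ring: order goals by their winning neuron index.
    winners = [np.argmin(np.linalg.norm(ring - g, axis=1)) for g in goals]
    return np.argsort(winners)
```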
{"title":"Multi-robot path planning for budgeted active perception with self-organising maps","authors":"Graeme Best, J. Faigl, R. Fitch","doi":"10.1109/IROS.2016.7759489","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759489","url":null,"abstract":"We propose a self-organising map (SOM) algorithm as a solution to a new multi-goal path planning problem for active perception and data collection tasks. We optimise paths for a multi-robot team that aims to maximally observe a set of nodes in the environment. The selected nodes are observed by visiting associated viewpoint regions defined by a sensor model. The key problem characteristics are that the viewpoint regions are overlapping polygonal continuous regions, each node has an observation reward, and the robots are constrained by travel budgets. The SOM algorithm jointly selects and allocates nodes to the robots and finds favourable sequences of sensing locations. The algorithm has polynomial-bounded runtime independent of the number of robots. We demonstrate feasibility for the active perception task of observing a set of 3D objects. The viewpoint regions consider sensing ranges and self-occlusions, and the rewards are measured as discriminability in the ensemble of shape functions feature space. Simulations were performed using a 3D point cloud dataset from a real robot in a large outdoor environment. Our results show the proposed methods enable multi-robot planning for budgeted active perception tasks with continuous sets of candidate viewpoints and long planning horizons.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130501200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic arrival rate estimation for campus Mobility On Demand network graphs
Justin Miller, Andres Hasfura, Shih‐Yuan Liu, J. How
Pub Date: 2016-10-01 | DOI: 10.1109/IROS.2016.7759357
Mobility On Demand (MOD) systems are revolutionizing transportation in urban settings by improving vehicle utilization and reducing parking congestion. A key factor in the success of an MOD system is the ability to measure and respond to real-time customer arrival data. Real-time arrival rate data is traditionally difficult to obtain because it requires installing fixed sensors throughout the MOD network. This paper presents a framework for measuring pedestrian traffic arrival rates using sensors onboard the vehicles that make up the MOD fleet. A novel distributed fusion algorithm combines onboard LIDAR and camera measurements to detect pedestrian trajectories with a 90% detection hit rate and 1.5 false positives per minute. A novel moving observer method is introduced to estimate pedestrian arrival rates from the trajectories collected by the mobile sensors. The moving observer method is evaluated in both simulation and hardware, and is shown to achieve arrival rate estimates comparable to those obtained with multiple stationary sensors.
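A sketch of a moving-observer-style Poisson rate estimate, as an assumed simplification of the paper's method: while the vehicle drives, it logs how long each graph region stays inside its sensor footprint and how many new pedestrian tracks appear there, and the arrival rate is arrivals divided by accumulated observation time. The data structures and update rule here are illustrative.

```python
from collections import defaultdict

obs_time = defaultdict(float)   # seconds each region has been observed
arrivals = defaultdict(int)     # new pedestrian tracks seen per region

def log_frame(regions_in_view, new_tracks_per_region, dt):
    # Called once per sensor frame while the vehicle moves.
    for r in regions_in_view:
        obs_time[r] += dt
        arrivals[r] += new_tracks_per_region.get(r, 0)

def arrival_rate(region):
    # Maximum-likelihood Poisson rate; undefined until the region is observed.
    return arrivals[region] / obs_time[region] if obs_time[region] else None
```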
{"title":"Dynamic arrival rate estimation for campus Mobility On Demand network graphs","authors":"Justin Miller, Andres Hasfura, Shih‐Yuan Liu, J. How","doi":"10.1109/IROS.2016.7759357","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759357","url":null,"abstract":"Mobility On Demand (MOD) systems are revolutionizing transportation in urban settings by improving vehicle utilization and reducing parking congestion. A key factor in the success of an MOD system is the ability to measure and respond to real-time customer arrival data. Real time traffic arrival rate data is traditionally difficult to obtain due to the need to install fixed sensors throughout the MOD network. This paper presents a framework for measuring pedestrian traffic arrival rates using sensors onboard the vehicles that make up the MOD fleet. A novel distributed fusion algorithm is presented which combines onboard LIDAR and camera sensor measurements to detect trajectories of pedestrians with a 90% detection hit rate with 1.5 false positives per minute. A novel moving observer method is introduced to estimate pedestrian arrival rates from pedestrian trajectories collected from mobile sensors. The moving observer method is evaluated in both simulation and hardware and is shown to achieve arrival rate estimates comparable to those that would be obtained with multiple stationary sensors.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114971408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Runtime SES planning: Online motion planning in environments with stochastic dynamics and uncertainty
H. Chiang, N. Rackley, Lydia Tapia
Pub Date: 2016-10-01 | DOI: 10.1109/IROS.2016.7759705
Motion planning in stochastic, dynamic, uncertain environments is critical in applications such as human-interacting robots, autonomous vehicles, and assistive robots. Several methods have been developed to address these complex applications. The most successful methods often predict future obstacle locations in order to identify collision-free paths. Since prediction can be computationally expensive, offline computation is commonly used, and simplifications such as ignoring the dynamics of interacting obstacles or possible stochastic dynamics are often applied. Online methods are preferable for simulating potential obstacle interactions, but recent methods have been restricted to Gaussian interaction processes and uncertainty. In this paper we present an online motion planning method, Runtime Stochastic Ensemble Simulation (Runtime SES) planning, an inexpensive way to predict obstacle motion under generic stochastic dynamics while maintaining a high planning success rate despite potential obstacle position error. Runtime SES planning evaluates the likelihood of collision for any state-time coordinate around the robot by performing Monte Carlo simulations online. This prediction is used to construct a customized Rapidly-exploring Random Tree (RRT) that quickly identifies paths that avoid obstacles while moving toward a goal. We demonstrate Runtime SES planning on problems that benefit from online prediction: environments with strongly-interacting obstacles governed by stochastic dynamics and observed with positional error. Through experiments that explore the impact of various parametrizations, robot dynamics, and obstacle interaction models, we show that real-time-capable planning with a high success rate is achievable in several complex environments.
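The Monte Carlo collision estimate can be sketched as follows, assuming simplified point obstacles and a user-supplied stochastic step function: propagate many sampled obstacle trajectories forward and count how often a queried robot state-time point ends up in collision. This is a minimal sketch, not the paper's ensemble machinery.

```python
import numpy as np

def collision_probability(obstacles, step, query_xy, query_t,
                          dt=0.1, n_samples=500, radius=0.5):
    # Fraction of sampled futures in which some obstacle is within
    # `radius` of `query_xy` at time `query_t`.
    steps = int(query_t / dt)
    query_xy = np.asarray(query_xy, dtype=float)
    hits = 0
    for _ in range(n_samples):
        positions = [np.array(o, dtype=float) for o in obstacles]
        for _ in range(steps):
            positions = [step(p, dt) for p in positions]   # stochastic step
        if any(np.linalg.norm(p - query_xy) < radius for p in positions):
            hits += 1
    return hits / n_samples

# Example stochastic dynamics (assumed): noisy constant-velocity drift.
def noisy_drift(p, dt, v=np.array([0.5, 0.0]), sigma=0.2):
    return p + v * dt + sigma * np.sqrt(dt) * np.random.randn(2)

p_hit = collision_probability([(0.0, 0.0)], noisy_drift, (1.0, 0.0), 2.0)
```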
{"title":"Runtime SES planning: Online motion planning in environments with stochastic dynamics and uncertainty","authors":"H. Chiang, N. Rackley, Lydia Tapia","doi":"10.1109/IROS.2016.7759705","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759705","url":null,"abstract":"Motion planning in stochastic dynamic uncertain environments is critical in several applications such as human interacting robots, autonomous vehicles and assistive robots. In order to address these complex applications, several methods have been developed. The most successful methods often predict future obstacle locations in order identify collision-free paths. Since prediction can be computationally expensive, offline computations are commonly used, and simplifications such as the inability to consider the dynamics of interacting obstacles or possible stochastic dynamics are often applied. Online methods can be preferable to simulate potential obstacle interactions, but recent methods have been restricted to Gaussian interaction processes and uncertainty. In this paper we present an online motion planning method, Runtime Stochastic Ensemble Simulation (Runtime SES) planning, an inexpensive method for predicting obstacle motion with generic stochastic dynamics while maintaining a high planning success rate despite the potential presence of obstacle position error. Runtime SES planning evaluates the likelihood of collision for any state-time coordinate around the robot by performing Monte Carlo simulations online. This prediction is used to construct a customized Rapidly Exploring Random Tree (RRT) in order to quickly identify paths that avoid obstacles while moving toward a goal. We demonstrate Runtime SES planning in problems that benefit from online predictions, environments with strongly-interacting obstacles with stochastic dynamics and positional error. Through experiments that explore the impact of various parametrizations, robot dynamics and obstacle interaction models, we show that real-time capable planning with a high success rate is achievable in several complex environments.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"774 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115642669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Snake robots in contact with the environment: Influence of the configuration on the applied wrench
F. Reyes, Shugen Ma
Pub Date: 2016-10-01 | DOI: 10.1109/IROS.2016.7759567
Robots capable of both locomotion and interaction with the environment are necessary for moving from ideal laboratory conditions to real applications. Snake robots have been studied for locomotion in unstructured environments thanks to their unique and adaptable gaits. However, they have not been used to interact with the environment in a dexterous manner, for example to grasp or push an object. In this paper, the model of a snake robot in contact with external objects (or the environment) is derived. Coordinate-independent metrics are derived to quantify the wrench that a snake robot can exert on these objects. In particular, we show that the configuration of the robot plays a significant role in these metrics. The model and metrics are tested in a case study.
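To see why configuration matters, consider a simple planar proxy (illustrative only; unlike the paper's metrics, this torque-bound measure is not coordinate-independent): quasi-statically, exerting a head force f requires joint torques tau = J^T f, so with bounded torques the achievable force along a given direction depends entirely on the Jacobian of the current configuration.

```python
import numpy as np

def planar_jacobian(joint_angles, link_length=0.2):
    # Jacobian of the head position of a planar serial chain with equal links.
    angles = np.cumsum(joint_angles)      # absolute link angles
    n = len(joint_angles)
    J = np.zeros((2, n))
    for i in range(n):
        # d(head)/d(theta_i) sums contributions of links i..n-1.
        J[0, i] = -link_length * np.sum(np.sin(angles[i:]))
        J[1, i] = link_length * np.sum(np.cos(angles[i:]))
    return J

def max_force(joint_angles, direction, tau_max=1.0):
    # Largest force along a unit `direction` the head can exert with
    # each joint torque bounded by tau_max, via tau = J^T f.
    J = planar_jacobian(joint_angles)
    load = np.abs(J.T @ np.asarray(direction, dtype=float))
    if np.max(load) < 1e-12:
        return np.inf   # the structure bears this direction without torque
    return tau_max / np.max(load)

# Straight versus curled configurations give very different capabilities.
print(max_force(np.zeros(6), (1.0, 0.0)))
print(max_force(np.full(6, 0.4), (1.0, 0.0)))
```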
{"title":"Snake robots in contact with the environment: Influence of the configuration on the applied wrench","authors":"F. Reyes, Shugen Ma","doi":"10.1109/IROS.2016.7759567","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759567","url":null,"abstract":"Robots capable of both locomotion and interaction with the environment are necessary for robots to move from ideal laboratory situations to real applications. Snake robots have been researched for locomotion in unstructured environments due to its unique and adaptable gaits. However, they have not been used to interact with the environment in a dexterous manner, for example to grasp or push an object. In this paper, the model of a snake robot in contact with external objects (or the environment) is derived. Metrics that are coordinate-independent are derived in order to quantify the wrench that a snake robot could exert into these objects. In particular, we show that the configuration of the robot plays a significant role in these metrics. The model and metrics are tested in a study case.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123079680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A deep-network solution towards model-less obstacle avoidance
L. Tai, Shaohua Li, Ming Liu
Pub Date: 2016-10-01 | DOI: 10.1109/IROS.2016.7759428
Obstacle avoidance is a core problem for mobile robots. Its objective is to allow mobile robots to explore an unknown environment without colliding with other objects, and it is the basis for various tasks such as surveillance and rescue. Previous approaches mainly focused on geometric models (such as constructing local cost-maps), which can be regarded as low-level intelligence without any cognitive process. Recently, deep learning has made great breakthroughs in computer vision, especially in recognition and cognitive tasks, by taking advantage of hierarchical models inspired by the structure of the human brain. Up to now, however, deep learning has seldom been used for control and decision making. Inspired by its advantages, we take indoor obstacle avoidance as an example to show the effectiveness of a hierarchical structure that fuses a convolutional neural network (CNN) with a decision process. The result is a highly compact network that takes raw depth images as input and generates control commands as output, achieving model-less obstacle avoidance. We test our approach in real-world indoor environments and report the findings and results at the end of the paper.
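A minimal sketch of this kind of perception-to-control network, with assumed layer sizes and an assumed discrete command set (e.g. five angular velocities): a small CNN maps a raw depth image directly to command logits, so perception and decision live in one model. The architecture below is illustrative, not the paper's exact network.

```python
import torch
import torch.nn as nn

class DepthToCommand(nn.Module):
    def __init__(self, n_commands=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_commands),     # logits over steering commands
        )

    def forward(self, depth):               # depth: (B, 1, H, W)
        return self.head(self.features(depth))

# Usage: pick the command with the highest logit; train with cross-entropy
# against commands demonstrated by a human operator or a planner.
model = DepthToCommand()
logits = model(torch.randn(1, 1, 120, 160))  # e.g. a downsampled depth frame
command = logits.argmax(dim=1)
```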
{"title":"A deep-network solution towards model-less obstacle avoidance","authors":"L. Tai, Shaohua Li, Ming Liu","doi":"10.1109/IROS.2016.7759428","DOIUrl":"https://doi.org/10.1109/IROS.2016.7759428","url":null,"abstract":"Obstacle avoidance is the core problem for mobile robots. Its objective is to allow mobile robots to explore an unknown environment without colliding into other objects. It is the basis for various tasks, e.g. surveillance and rescue, etc. Previous approaches mainly focused on geometric models (such as constructing local cost-maps) which could be regarded as low-level intelligence without any cognitive process. Recently, deep learning has made great breakthroughs in computer vision, especially for recognition and cognitive tasks. It takes advantage of the hierarchical models inspired by human brain structures. However, it is a fact that deep learning, up till now, has seldom been used for controlling and decision making. Inspired by the advantages of deep learning, we take indoor obstacle avoidance as example to show the effectiveness of a hierarchical structure that fuses a convolutional neural network (CNN) with a decision process. It is a highly compact network structure that takes raw depth images as input, and generates control commands as network output, by which a model-less obstacle avoidance behavior is achieved. We test our approach in real-world indoor environments. The new findings and results are reported at the end of the paper.","PeriodicalId":296337,"journal":{"name":"2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123093133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}