Trung Nguyen, G. Mann, A. Vardy and R. Gosine, "Developing a Cubature Multi-state Constraint Kalman Filter for Visual-Inertial Navigation System," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.19

The objective of this paper is to develop a cubature Multi-State Constraint Kalman Filter (MSCKF) for a Visual-Inertial Navigation System (VINS). The MSCKF is a tightly coupled EKF-based filter operating over a sliding window of consecutive states. To reduce the complexity and computational cost of the original EKF-based measurement update, the measurement model is built on Trifocal Tensor Geometry (TTG), so the predicted measurement does not require reconstructing the 3D positions of visual landmarks. To employ this nonlinear TTG-based measurement model, the paper implements the cubature approach popularly associated with the Cubature Kalman Filter (CKF). Compared with other advanced nonlinear filters, specifically the Unscented Kalman Filter (UKF), the CKF avoids the loss of positive definiteness in the covariance computation that can halt or fail filter operation. The proposed filter is validated on three KITTI datasets [1] of residential areas to evaluate its performance.
Mengliu Zhao and G. Hamarneh, "Bifurcation Localization in 3D Images via Evolutionary Geometric Deformable Templates," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.12

Given the importance of studying bifurcations in 3D anatomical trees (e.g. vasculature and airways), we propose a bifurcation detector that fits a parametric geometric deformable model to 3D medical images. A fitness function is designed to integrate features along the model's skeletons, surfaces and internal areas. To overcome local optima while detecting multiple bifurcations in a single image, we adopt a genetic algorithm with a tribes niching technique. Results on both VascuSynth data and clinical CT data demonstrate not only high bifurcation detection accuracy and stability, but also the ability to locate parent and child branch directions and vessel wall locations simultaneously.
E. M. Reina, K. Pu and F. Qureshi, "An Index Structure for Fast Range Search in Hamming Space," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.37

This paper addresses the problem of indexing and querying very large databases of binary vectors. Such databases are a common occurrence in domains such as information retrieval and computer vision. We propose an indexing structure consisting of a compressed bitwise trie and a hash table for supporting range queries in Hamming space. The index structure, which can be updated incrementally, can answer range queries of any radius. Our approach significantly outperforms state-of-the-art approaches.
Ludovic Trottier, P. Giguère and B. Chaib-draa, "Convolutional Residual Network for Grasp Localization," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.14

Object grasping is an important ability for carrying out complex manipulation tasks with autonomous robotic systems. The grasp localization module plays an essential role in the success of the grasp maneuver. Generally viewed as a vision perception problem, its goal is to determine regions of high graspability by interpreting light and depth information. Over the past few years, several works in Deep Learning (DL) have shown the high potential of Convolutional Neural Networks (CNNs) for solving vision-related problems. Advances in residual networks have further facilitated neural network training by improving convergence time and generalization performance with identity skip connections and residual mappings. In this paper, we investigate the use of residual networks for grasp localization. A standard residual CNN for object recognition uses a global average pooling layer prior to the fully-connected layers. Our experiments have shown that this pooling layer removes the spatial correlation in the back-propagated error signal, preventing the network from correctly localizing good grasp regions. We propose an architecture modification that removes this limitation. Our experiments on the Cornell grasping task show that our network obtained state-of-the-art performance of 10.85% and 11.86% rectangle-metric error on the image-wise and object-wise splits respectively. We did not use pre-training but rather opted for online data augmentation to manage overfitting. Compared to a previous approach that employed off-line data augmentation, our network used 15x fewer observations, which significantly reduced training time.
Juehui Fan and Herbert Yang, "Depth Estimation of Semi-submerged Objects Using a Light-Field Camera," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.44

We present an algorithm to estimate the depth of real-world scenes containing an object semi-submerged in water using a light field camera. Existing hand-held consumer light field cameras are well suited for automated refocusing and depth detection in outdoor environments. However, when it comes to surveying marine environments and near-water macro photography, depth estimation algorithms based on the traditional perspective camera model fail because of refracted rays. In this paper, we present a new method that explicitly accommodates the effect of refraction and resolves correct depths of underwater scene points. A semi-submerged object with an opaque Lambertian surface and repeating textures is assumed. After removing the effect of refraction, the reconstructed underwater part of the semi-submerged object has depth and shape consistent with those of the above-water part.
M. Helala and F. Qureshi, "Fast Estimation of Large Displacement Optical Flow Using Dominant Motion Patterns & Sub-Volume PatchMatch Filtering," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.40

This paper presents a new method for efficiently computing large-displacement optical flow. The method uses dominant motion patterns to identify a sparse set of sub-volumes within the cost volume and restricts subsequent Edge-Aware Filtering (EAF) to these sub-volumes, using an extension of PatchMatch to filter them. Because EAF is applied to only a small fraction of the entire cost volume, runtime performance improves. We also show that computational complexity is linear in the size of the images and does not depend on the size of the label space. We evaluate the proposed technique on the MPI Sintel, Middlebury and KITTI benchmarks and show that our method achieves accuracy comparable to that of several recent state-of-the-art methods, while posting significantly faster runtimes.
David Abou Chacra and J. Zelek, "Fully Automated Road Defect Detection Using Street View Images," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.50

Road quality assessment is a crucial part of municipalities' work to maintain their infrastructure, plan upgrades, and manage their budgets. Properly maintaining this infrastructure relies heavily on consistently monitoring its condition and deterioration over time. This can be a challenge, especially in larger towns and cities where there is a great deal of city property to monitor. We review road quality assessment methods currently employed, and then describe our novel system, which integrates a collection of existing algorithms aimed at identifying distressed road regions from street view images and pinpointing cracks within them. We predict distressed regions by computing Fisher vectors on local SIFT descriptors and classifying them with an SVM trained to distinguish between road qualities. We follow this step with a comparison to a weighted contour map within these distressed regions to identify exact crack and defect locations, and use the contour weights to predict crack severity. Promising results on our manually annotated dataset indicate the viability of using this cost-effective system to perform road quality assessment at the municipal level.
Shrimanti Ghosh, Nilanjan Ray and P. Boulanger, "A Structured Deep-Learning Based Approach for the Automated Segmentation of Human Leg Muscle from 3D MRI," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.32

In this paper, we present an automated algorithm for segmenting human leg muscles from 3D MRI data using a deep convolutional neural network (CNN). Using a generalized cylinder model, the human leg muscle can be represented by two smooth 2D parametric images describing the contour of the muscle in the MRI image. The proposed CNN predicts these two parametrized images from raw 3D voxels. We use a pre-trained AlexNet as our baseline and further fine-tune the network for this problem. In this scheme, AlexNet predicts a compressed vector obtained by applying principal component analysis, which is then back-projected into the two parametric 2D images representing the leg muscle contours. We show that the proposed CNN with a structured regression model can outperform a conventional model-based segmentation approach such as the Active Appearance Model (AAM). The average Dice score between the ground-truth segmentation and the obtained segmentation is 0.87 for the proposed CNN model, whereas the AAM scores 0.68. One of the greatest advantages of our proposed method is that, unlike AAM, no initialization is needed to predict the segmentation contour.
Amit P. Desai, Lourdes Peña Castillo and Oscar E. Meruvia Pastor, "A Window to Your Smartphone: Exploring Interaction and Communication in Immersive VR with Augmented Virtuality," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.16

A major drawback of most Head Mounted Displays (HMDs) used in immersive Virtual Reality (VR) is the visual and social isolation of users from their real-world surroundings while wearing these headsets. This partial isolation of users from the real world might hinder social interactions with friends and family. To address this issue, we present a new method that allows people wearing VR HMDs to use their smartphones or tablets without removing their HMDs. To do this, we augment the scene inside the VR HMD with a view of the user's device so that the user can interact with the device without removing the headset. The idea involves the use of additional cameras, such as the Leap Motion device or a high-resolution RGB camera, to capture the user's real-world surroundings and augment the virtual world with the content displayed on the smartphone screen. This setup gives VR users a window to their smartphone from within the virtual world, affording all of the functionality provided by their smartphones, with the potential to reduce some of the undesirable isolation users may experience when using immersive VR systems.
A. Abdelaal, Maram Sakr and R. Vaughan, "LOST Highway: A Multiple-Lane Ant-Trail Algorithm to Reduce Congestion in Large-Population Multi-robot Systems," 2017 14th Conference on Computer and Robot Vision (CRV), May 2017. doi:10.1109/CRV.2017.24

We propose a modification of a well-known ant-inspired trail-following algorithm to reduce congestion in multi-robot systems. Our method results in robots moving in multiple lanes towards their goal location, inspired by the practice of building multiple-lane highways to mitigate congestion in traffic engineering. We consider the resource transportation task, in which autonomous robots repeatedly transport goods between a food source and a nest in an initially unknown environment. To evaluate our algorithm, we perform simulation experiments in several environments with and without obstacles. Compared with the baseline SO-LOST algorithm, our modified method increases system throughput by up to 3.9 times by supporting a larger productive robot population.