The study of combined task and motion planning has mostly been concerned with feasibility planning for high-dimensional, complex manipulation problems. This paper instead addresses optimal planning for low-dimensional problems and introduces a dynamic, anytime task and path planner for mobile robots. The proposed approach adopts a multi-tree extension of the T-RRT* algorithm in the path planning layer and adds dynamic and anytime planning components that enable low-level path correction and high-level re-planning when operating in dynamic or partially known environments. Evaluation against existing methods shows reduced solution-plan costs while remaining computationally efficient, and simulated deployment validates the effectiveness of the dynamic, anytime behaviour of the proposed approach.
{"title":"Dynamic, Anytime Task and Path Planning for Mobile Robots","authors":"Cuebong Wong, Erfu Yang, Xiu T. Yan, Dongbing Gu","doi":"10.31256/UKRAS19.10","DOIUrl":"https://doi.org/10.31256/UKRAS19.10","url":null,"abstract":"The study of combined task and motion planning has mostly been concerned with feasibility planning for high-dimensional, complex manipulation problems. Instead this paper gives its attention to optimal planning for low-dimensional planning problems and introduces the dynamic, anytime task and path planner for mobile robots. The proposed approach adopts a multi-tree extension of the T-RRT* algorithm in the path planning layer and further introduces dynamic and anytime planning components to enable low-level path correction and high-level re-planning capabilities when operating in dynamic or partially-known environments. Evaluation of the planner against existing methods show cost reductions of solution plans while remaining computationally efficient, and simulated deployment of the planner validates the effectiveness of the dynamic, anytime behavior of the proposed approach.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123238559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiangtao Wang, Yang Zhou, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani
To simulate the underwater environment and test algorithms for autonomous underwater vehicles, we developed an underwater simulation environment in Unreal Engine 4 that generates underwater visual data such as seagrass and landscape scenes. We then used data from this environment to train and verify an underwater image segmentation model, an important step towards vision-based navigation. The simulation environment shows potential both for generating datasets and for testing robot vision algorithms.
{"title":"Can underwater environment simulation contribute to vision tasks for autonomous systems?","authors":"Jiangtao Wang, Yang Zhou, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani","doi":"10.31256/UKRAS19.26","DOIUrl":"https://doi.org/10.31256/UKRAS19.26","url":null,"abstract":"To simulate the underwater environment and test\u0000algorithms for autonomous underwater vehicles, we developed\u0000an underwater simulation environment with the Unreal Engine 4\u0000to generate underwater visual data such as seagrass and\u0000landscape. We then used such data from the Unreal environment\u0000to train and verify an underwater image segmentation model,\u0000which is an important technology to later achieve visual based\u0000navigation. The simulation environment shows the potentials for\u0000dataset generalization and testing robot vision algorithms.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132745065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yang Zhou, Jiangtao Wang, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani
This paper proposes a deep neural network architecture for underwater scene semantic segmentation. The architecture consists of encoder and decoder networks: a pretrained VGG-16 network serves as the feature extractor, while the decoder learns to expand the lower-resolution feature maps. The network applies a max-unpooling operator to avoid a large number of learnable parameters and, to make use of the encoder feature maps, concatenates them with the corresponding decoder feature maps at lower resolutions. Our architecture shows faster convergence and better accuracy. To obtain a clearer view of the underwater scene, an underwater image enhancement neural network is also described and applied during training, which speeds up the training process and improves the convergence rate.
{"title":"Underwater Scene Segmentation by Deep Neural Network","authors":"Yang Zhou, Jiangtao Wang, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani","doi":"10.31256/UKRAS19.12","DOIUrl":"https://doi.org/10.31256/UKRAS19.12","url":null,"abstract":"A deep neural network architecture is proposed in\u0000this paper for underwater scene semantic segmentation. The\u0000architecture consists of encoder and decoder networks. Pretrained VGG-16 network is used as a feature extractor, while the\u0000decoder learns to expand the lower resolution feature maps. The\u0000network applies max un-pooling operator to avoid large number\u0000of learnable parameters, and, in order to make use of the feature\u0000maps in encoder network, it concatenates the feature maps with\u0000decoder and encoder for lower resolution feature maps. Our\u0000architecture shows capabilities of faster convergence and better\u0000accuracy. To get a clear view of underwater scene, an underwater\u0000enhancement neural network architecture is described in this\u0000paper and applied for training. It speeds up the training process\u0000and convergence rate in training.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125175659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}