
UK-RAS19 Conference: "Embedded Intelligence: Enabling and Supporting RAS Technologies" Proceedings · Latest Publications

Enabling functional resilience in autonomous multi-arm and multi-vehicle cooperative tasks
A. Behera
{"title":"Enabling functional resilience in autonomous multi-arm and multi- vehicle cooperative tasks","authors":"A. Behera","doi":"10.31256/UKRAS19.8","DOIUrl":"https://doi.org/10.31256/UKRAS19.8","url":null,"abstract":"","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132472179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic, Anytime Task and Path Planning for Mobile Robots
Cuebong Wong, Erfu Yang, Xiu T. Yan, Dongbing Gu
The study of combined task and motion planning has mostly been concerned with feasibility planning for high-dimensional, complex manipulation problems. This paper instead addresses optimal planning for low-dimensional planning problems and introduces a dynamic, anytime task and path planner for mobile robots. The proposed approach adopts a multi-tree extension of the T-RRT* algorithm in the path-planning layer and further introduces dynamic and anytime planning components to enable low-level path correction and high-level re-planning when operating in dynamic or partially known environments. Evaluation of the planner against existing methods shows reduced solution-plan costs while remaining computationally efficient, and simulated deployment validates the effectiveness of the planner's dynamic, anytime behaviour.
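The abstract describes a planner that keeps improving its current solution within a time budget and re-plans when the environment changes. The Python sketch below illustrates only that generic anytime/dynamic control loop; `sample_path` and `environment_changed` are hypothetical placeholders, and this is not the authors' multi-tree T-RRT* planner.

```python
import math
import random
import time

# Illustrative placeholders (not the authors' planner): one planning
# attempt and an environment-change check.
def path_cost(path):
    """Euclidean length of a 2-D way-point path."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def sample_path(start, goal, n_waypoints=3):
    """Stand-in for a single sampling-based planning attempt."""
    mid = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n_waypoints)]
    return [start] + mid + [goal]

def environment_changed():
    """Stand-in for a map/sensor update check."""
    return random.random() < 0.05

def anytime_plan(start, goal, budget_s=0.5):
    """Keep refining the best plan until the time budget expires;
    discard the current plan and re-plan if the environment changes."""
    best, best_cost = None, float("inf")
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        if environment_changed():
            best, best_cost = None, float("inf")   # dynamic: invalidate and re-plan
        candidate = sample_path(start, goal)
        cost = path_cost(candidate)
        if cost < best_cost:                       # anytime: keep the best so far
            best, best_cost = candidate, cost
    return best, best_cost

if __name__ == "__main__":
    path, cost = anytime_plan((0.0, 0.0), (10.0, 10.0))
    print(f"best path cost within budget: {cost:.2f}")
```

In a faithful implementation, `sample_path` would be replaced by extensions of the multi-tree T-RRT*, and existing trees would be repaired after a map change rather than rebuilt from scratch.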
{"title":"Dynamic, Anytime Task and Path Planning for Mobile Robots","authors":"Cuebong Wong, Erfu Yang, Xiu T. Yan, Dongbing Gu","doi":"10.31256/UKRAS19.10","DOIUrl":"https://doi.org/10.31256/UKRAS19.10","url":null,"abstract":"The study of combined task and motion planning has mostly been concerned with feasibility planning for high-dimensional, complex manipulation problems. Instead this paper gives its attention to optimal planning for low-dimensional planning problems and introduces the dynamic, anytime task and path planner for mobile robots. The proposed approach adopts a multi-tree extension of the T-RRT* algorithm in the path planning layer and further introduces dynamic and anytime planning components to enable low-level path correction and high-level re-planning capabilities when operating in dynamic or partially-known environments. Evaluation of the planner against existing methods show cost reductions of solution plans while remaining computationally efficient, and simulated deployment of the planner validates the effectiveness of the dynamic, anytime behavior of the proposed approach.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123238559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Can underwater environment simulation contribute to vision tasks for autonomous systems?
Jiangtao Wang, Yang Zhou, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani
To simulate the underwater environment and test algorithms for autonomous underwater vehicles, we developed an underwater simulation environment with Unreal Engine 4 to generate underwater visual data such as seagrass and landscape imagery. We then used data from the Unreal environment to train and verify an underwater image segmentation model, an important technology for later achieving vision-based navigation. The simulation environment shows potential for dataset generalization and for testing robot vision algorithms.
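As a rough illustration of how simulator-rendered frames could feed a segmentation pipeline, the sketch below wraps image/label pairs as a PyTorch dataset. The directory layout (`images/*.png` with matching `masks/*.png`) and the class-id mask encoding are assumptions made for illustration, not the authors' actual data format.

```python
import glob
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

class SimulatedUnderwaterDataset(Dataset):
    """Pairs each simulator-rendered frame with its per-pixel class-id mask.
    The directory layout (root/images/*.png, root/masks/*.png) is assumed."""

    def __init__(self, root):
        self.image_paths = sorted(glob.glob(os.path.join(root, "images", "*.png")))
        self.mask_paths = sorted(glob.glob(os.path.join(root, "masks", "*.png")))

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        # Load an RGB frame rendered by the simulator and scale to [0, 1].
        img = np.asarray(Image.open(self.image_paths[idx]).convert("RGB"),
                         dtype=np.float32) / 255.0
        # The matching mask stores one integer class id per pixel.
        mask = np.asarray(Image.open(self.mask_paths[idx]), dtype=np.int64)
        return torch.from_numpy(img).permute(2, 0, 1), torch.from_numpy(mask)
```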
{"title":"Can underwater environment simulation contribute to vision tasks for autonomous systems?","authors":"Jiangtao Wang, Yang Zhou, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani","doi":"10.31256/UKRAS19.26","DOIUrl":"https://doi.org/10.31256/UKRAS19.26","url":null,"abstract":"To simulate the underwater environment and test\u0000algorithms for autonomous underwater vehicles, we developed\u0000an underwater simulation environment with the Unreal Engine 4\u0000to generate underwater visual data such as seagrass and\u0000landscape. We then used such data from the Unreal environment\u0000to train and verify an underwater image segmentation model,\u0000which is an important technology to later achieve visual based\u0000navigation. The simulation environment shows the potentials for\u0000dataset generalization and testing robot vision algorithms.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132745065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Underwater Scene Segmentation by Deep Neural Network
Yang Zhou, Jiangtao Wang, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani
A deep neural network architecture is proposed in this paper for underwater scene semantic segmentation. The architecture consists of encoder and decoder networks. A pre-trained VGG-16 network is used as the feature extractor, while the decoder learns to expand the lower-resolution feature maps. The network applies a max-unpooling operator to avoid a large number of learnable parameters and, to make use of the feature maps in the encoder network, concatenates the encoder and decoder feature maps at the lower resolutions. Our architecture shows faster convergence and better accuracy. To obtain a clear view of the underwater scene, an underwater enhancement neural network architecture is also described in this paper and applied during training; it speeds up the training process and the convergence rate.
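The two mechanisms named in the abstract, max-unpooling with the pooling indices saved by the encoder and concatenating encoder feature maps into the decoder, can be shown in a single-stage PyTorch toy model. This is a simplified sketch under assumed channel counts and class count, not the paper's VGG-16-based network.

```python
import torch
import torch.nn as nn

class ToyEncoderDecoder(nn.Module):
    """Single-stage illustration of max-unpooling plus encoder/decoder
    feature-map concatenation; a full model would stack several stages
    and reuse pretrained VGG-16 encoder weights."""

    def __init__(self, num_classes=8):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # keep max locations
        self.unpool = nn.MaxUnpool2d(2, stride=2)                   # reuse them upsampling
        self.dec = nn.Sequential(
            # 64 decoder channels + 64 skip channels after concatenation
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1))

    def forward(self, x):
        f = self.enc(x)                                   # encoder features (skip)
        p, idx = self.pool(f)                             # downsample, remember indices
        up = self.unpool(p, idx, output_size=f.size())    # parameter-free upsampling
        merged = torch.cat([up, f], dim=1)                # concatenate encoder/decoder maps
        return self.dec(merged)

if __name__ == "__main__":
    logits = ToyEncoderDecoder()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 8, 64, 64])
```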
{"title":"Underwater Scene Segmentation by Deep Neural Network","authors":"Yang Zhou, Jiangtao Wang, Baihua Li, Q. Meng, Emanuele Rocco, Andrea Saiani","doi":"10.31256/UKRAS19.12","DOIUrl":"https://doi.org/10.31256/UKRAS19.12","url":null,"abstract":"A deep neural network architecture is proposed in\u0000this paper for underwater scene semantic segmentation. The\u0000architecture consists of encoder and decoder networks. Pretrained VGG-16 network is used as a feature extractor, while the\u0000decoder learns to expand the lower resolution feature maps. The\u0000network applies max un-pooling operator to avoid large number\u0000of learnable parameters, and, in order to make use of the feature\u0000maps in encoder network, it concatenates the feature maps with\u0000decoder and encoder for lower resolution feature maps. Our\u0000architecture shows capabilities of faster convergence and better\u0000accuracy. To get a clear view of underwater scene, an underwater\u0000enhancement neural network architecture is described in this\u0000paper and applied for training. It speeds up the training process\u0000and convergence rate in training.","PeriodicalId":424229,"journal":{"name":"UK-RAS19 Conference: \"Embedded Intelligence: Enabling and Supporting RAS Technologies\" Proceedings","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125175659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6