Vision recognition using shape context for autonomous underwater sampling

K. McBryan, D. Akin
{"title":"Vision recognition using shape context for autonomous underwater sampling","authors":"K. McBryan, D. Akin","doi":"10.1109/AUV.2012.6380730","DOIUrl":null,"url":null,"abstract":"The ocean floor is one of the few remaining unexplored places on the planet. Underwater vehicles, both teleoperated and autonomous, have been built to take images of the ocean floor. The depth that a teleoperated vehicle can achieve is limited by its tether. Autonomous vehicles are able to study the deepest parts of the ocean without a complex tether system. These vehicles, while being great at mapping the ocean floor, are not able to autonomously retrieve samples. In order to retrieve samples the vehicle must: know what objects look like, correctly identify new instances of the target object, estimate the pose so the manipulator can grab it, and retrieve its coordinates in 3D space. Color filtering, shape context and the use of stereovision have been used to autonomously locate, identify, and estimate the pose of objects. Color filtering allows the image to be filtered so that only objects of similar color remain and extraneous information can be disregarded. Shape context matches the shape, as defined by the edge pixels, of each potential target to a known object. Shape context uses a costing function to determine if the potential target is a match to the known object. The costing function takes into account the amount of 'bending energy' it takes to make the shape of the potential target conform to that of the known object. This gives a metric of how well the match is between the potential target and a known object and is done for both the left and right cameras. Once objects have been identified in each image, calibration parameters can be used to retrieve the 3D position of the object. 
This allows a manipulator on an underwater vehicle to autonomously sample targets.","PeriodicalId":340133,"journal":{"name":"2012 IEEE/OES Autonomous Underwater Vehicles (AUV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE/OES Autonomous Underwater Vehicles (AUV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AUV.2012.6380730","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The ocean floor is one of the few remaining unexplored places on the planet. Underwater vehicles, both teleoperated and autonomous, have been built to image the ocean floor. The depth a teleoperated vehicle can reach is limited by its tether, while autonomous vehicles can study the deepest parts of the ocean without a complex tether system. These vehicles, although well suited to mapping the ocean floor, cannot autonomously retrieve samples. To retrieve a sample, the vehicle must know what target objects look like, correctly identify new instances of the target object, estimate the object's pose so the manipulator can grasp it, and recover its coordinates in 3D space. Color filtering, shape context, and stereo vision are used to autonomously locate, identify, and estimate the pose of objects. Color filtering removes extraneous information by keeping only objects of a color similar to the target's. Shape context matches the shape of each potential target, as defined by its edge pixels, to a known object, using a cost function to decide whether the two match. The cost function accounts for the 'bending energy' required to deform the shape of the potential target into that of the known object, giving a metric of match quality; this is computed for both the left and right camera images. Once an object has been identified in each image, the stereo calibration parameters can be used to recover its 3D position. This allows a manipulator on an underwater vehicle to autonomously sample targets.
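The shape-context matching step described above can be sketched as follows. This is a minimal NumPy illustration in the style of the standard shape-context descriptor (a log-polar histogram of the positions of the other edge points, compared between shapes with a chi-square cost); the function names are hypothetical, and the full method in the paper — thin-plate-spline 'bending energy' and an optimal point-to-point assignment — is omitted here.

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape-context histogram for each edge point.

    points: (N, 2) array of edge-pixel coordinates.
    Returns an (N, n_r * n_theta) array of normalized histograms.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    diff = pts[None, :, :] - pts[:, None, :]          # pairwise offsets
    r = np.hypot(diff[..., 0], diff[..., 1])          # pairwise distances
    theta = np.arctan2(diff[..., 1], diff[..., 0]) % (2 * np.pi)
    mean_r = r[r > 0].mean()                          # scale normalization
    # Log-spaced radial bin edges, relative to the mean pairwise distance.
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_r + 1) * mean_r
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.ones(n, dtype=bool)
        mask[i] = False                               # skip the point itself
        r_bin = np.searchsorted(r_edges, r[i, mask]) - 1
        t_bin = (theta[i, mask] / (2 * np.pi) * n_theta).astype(int) % n_theta
        valid = (r_bin >= 0) & (r_bin < n_r)          # drop out-of-range radii
        np.add.at(hists[i], r_bin[valid] * n_theta + t_bin[valid], 1)
    return hists / (hists.sum(axis=1, keepdims=True) + 1e-12)

def match_cost(h1, h2):
    """Chi-square matching cost between two sets of histograms.

    Uses a greedy per-point minimum as a cheap stand-in for the
    optimal assignment used in the full shape-context method.
    """
    a, b = h1[:, None, :], h2[None, :, :]
    cost = 0.5 * ((a - b) ** 2 / (a + b + 1e-12)).sum(-1)   # (N1, N2)
    return cost.min(axis=1).mean()
```

Because the descriptor is built from pairwise offsets and normalized by the mean pairwise distance, it is invariant to translation and scale: a shifted copy of a shape yields a near-zero cost, while a genuinely different shape yields a larger one. A fixed threshold on this cost can then decide whether a candidate in the left or right image matches the known target.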