{"title":"Sensor fusion of range and intensity data for subsea robotics","authors":"M. Chantler, C. Reid","doi":"10.1109/ICAR.1991.240373","DOIUrl":null,"url":null,"abstract":"The research reported is the product of one of a group of projects whose joint aim is to automate simple subsea manipulation tasks. The conditions typically encountered during underwater operations are particularly challenging for robotic automation, and in particular, pose real problems for computer vision. The authors feel that intensity data from underwater video cameras on their own are unlikely to supply sufficient information for reliable 3D scene analysis. They propose the use of two complementary sensors; a high frequency linear sonar array and an underwater video camera. A brief discussion of the calibration and registration of the sensor system is followed by a description of the sensor fusion technique which uses transformed sonar data to supervise the automatic segmentation of difficult video images. Results illustrate the effectiveness of the technique; by showing video segmentations with and without the use of the supervisory sonar data.<<ETX>>","PeriodicalId":356333,"journal":{"name":"Fifth International Conference on Advanced Robotics 'Robots in Unstructured Environments","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fifth International Conference on Advanced Robotics 'Robots in Unstructured Environments","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAR.1991.240373","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The research reported is the product of one of a group of projects whose joint aim is to automate simple subsea manipulation tasks. The conditions typically encountered during underwater operations are particularly challenging for robotic automation and, in particular, pose real problems for computer vision. The authors argue that intensity data from underwater video cameras alone are unlikely to supply sufficient information for reliable 3D scene analysis. They propose the use of two complementary sensors: a high-frequency linear sonar array and an underwater video camera. A brief discussion of the calibration and registration of the sensor system is followed by a description of the sensor fusion technique, which uses transformed sonar data to supervise the automatic segmentation of difficult video images. Results illustrate the effectiveness of the technique by showing video segmentations with and without the use of the supervisory sonar data.
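The abstract does not give implementation details, but the overall idea it describes, registering sonar range returns against the camera and then using the projected points to supervise segmentation of the video image, can be illustrated in outline. Below is a minimal sketch, assuming a pinhole camera model, a rigid sonar-to-camera transform obtained from the calibration/registration step, and a simple intensity-based region-growing stage standing in for the paper's actual segmentation method; all numerical values and function names are hypothetical placeholders, not the authors' calibration results.

```python
# Illustrative sketch only: project sonar range points into a video frame via a
# calibrated rigid transform and pinhole model, then grow regions from those
# sonar-derived seed pixels. Not the authors' implementation.
import numpy as np
from collections import deque

def project_sonar_points(points_sonar, R, t, K):
    """Map 3-D sonar returns (N x 3, sonar frame) to pixel coordinates.

    R, t : rotation/translation from the sonar frame to the camera frame
           (assumed known from the calibration and registration step).
    K    : 3 x 3 pinhole intrinsic matrix of the underwater camera.
    """
    pts_cam = points_sonar @ R.T + t              # sonar frame -> camera frame
    pts_img = pts_cam @ K.T                       # perspective projection
    pts_img = pts_img[:, :2] / pts_img[:, 2:3]    # normalise by depth
    return pts_img.round().astype(int)            # (u, v) pixel seeds

def seeded_segmentation(image, seeds, tol=15):
    """Grow regions outward from the sonar-derived seed pixels.

    A neighbouring pixel joins a region when its grey level lies within `tol`
    of the seed's grey level; this stands in for the supervised segmentation.
    """
    h, w = image.shape
    labels = np.zeros((h, w), dtype=np.int32)
    for label, (u, v) in enumerate(seeds, start=1):
        if not (0 <= v < h and 0 <= u < w) or labels[v, u]:
            continue
        ref = float(image[v, u])
        queue = deque([(v, u)])
        labels[v, u] = label
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == 0
                        and abs(float(image[ny, nx]) - ref) <= tol):
                    labels[ny, nx] = label
                    queue.append((ny, nx))
    return labels

# Hypothetical usage with placeholder calibration data:
# K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
# R, t = np.eye(3), np.array([0.1, 0.0, 0.0])
# seeds = project_sonar_points(sonar_points, R, t, K)
# labels = seeded_segmentation(video_frame_grey, seeds)
```

The sketch only conveys the data flow implied by the abstract: range data are transformed into the camera's image coordinates, and the resulting pixel locations then guide ("supervise") the segmentation of intensity images that would otherwise be hard to segment on their own.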