S. Mohan, K. Simonsen, I. Balslev, V. Krüger, R. D. Eriksen
Title: 3D scanning of object surfaces using structured light and a single camera image
DOI: 10.1109/CASE.2011.6042450 (https://doi.org/10.1109/CASE.2011.6042450)
Published in: 2011 IEEE International Conference on Automation Science and Engineering
Publication date: 2011-10-13
Citations: 5
Abstract
We present a novel low-cost device for scanning object surfaces based on structured LED (Light Emitting Diode) light and a single, monocular image from a standard machine vision camera. During the calibration phase we find the displacement of the imaged LED-projected marker points as a function of depth. Using this displacement data and a suitable interpolation technique, the depth of surface points can be derived from a single image. Using this depth information one can derive the 3D coordinates of points corresponding to all the detected marker points in the camera image. We present the conditions for an unambiguous determination of the depth. The system has been tested under realistic factory hall conditions in an automated bin picking scenario using a vision guided robot.
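The abstract describes a calibration-then-lookup pipeline: during calibration, each LED-projected marker's image displacement is recorded as a function of known depth; at runtime, an observed displacement is inverted by interpolation to recover depth, and the pixel is back-projected to a 3D point. The sketch below illustrates that idea in Python; the function names, the linear toy calibration data, and the pinhole camera parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def calibrate_marker(depths, displacements):
    """Return a function mapping an observed marker displacement (pixels)
    back to depth, via linear interpolation over calibration samples.
    Assumes displacement varies monotonically with depth, which is the
    kind of unambiguity condition the paper refers to."""
    depths = np.asarray(depths, dtype=float)
    disp = np.asarray(displacements, dtype=float)
    order = np.argsort(disp)  # np.interp needs increasing x-values
    return lambda d: float(np.interp(d, disp[order], depths[order]))

def backproject(u, v, z, fx, fy, cx, cy):
    """Standard pinhole back-projection of pixel (u, v) at depth z
    to 3D camera coordinates (hypothetical intrinsics)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Toy calibration table for one marker: displacement shrinks with depth.
depth_from_disp = calibrate_marker(
    depths=[400.0, 600.0, 800.0, 1000.0],       # mm (illustrative)
    displacements=[120.0, 80.0, 60.0, 48.0],    # px (illustrative)
)

z = depth_from_disp(80.0)          # interpolated depth for 80 px
point = backproject(400, 260, z, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
```

In a real system one such interpolant would be built per detected marker point, so a single image yields a sparse 3D surface sample at every marker location.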