{"title":"Refinement of disparity estimates through the fusion of monocular image segmentations","authors":"D. McKeown, F. Perlant","doi":"10.1109/CVPR.1992.223146","DOIUrl":null,"url":null,"abstract":"The authors examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by the analysis of the original monocular imagery. They describe the utilization of surface illumination information provided by the segmentation of the monocular image into fine surface patches of nearly homogeneous intensity to remove mismatches generated during stereo matching. These patches are used to guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely with physical surfaces in the scene. Such a technique is quite independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Refinement results on complex urban scenes containing various man-made and natural features are presented, and the improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated.<<ETX>>","PeriodicalId":325476,"journal":{"name":"Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition","volume":"452 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.1992.223146","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
The authors examine how estimates of three-dimensional scene structure, as encoded in a scene disparity map, can be improved by analysis of the original monocular imagery. They describe the use of surface illumination information, provided by segmenting the monocular image into fine surface patches of nearly homogeneous intensity, to remove mismatches generated during stereo matching. These patches guide a statistical analysis of the disparity map based on the assumption that such patches correspond closely to physical surfaces in the scene. The technique is independent of whether the initial disparity map was generated by automated area-based or feature-based stereo matching. Refinement results on complex urban scenes containing various man-made and natural features are presented, and the improvements due to monocular fusion with a set of different region-based image segmentations are demonstrated.
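
To make the idea concrete, the following is a minimal sketch of region-guided disparity refinement in the spirit described above: within each segmented patch, disparity values that deviate strongly from the patch's robust statistics are treated as stereo mismatches and suppressed. The function name, parameters, and the specific median/MAD rule are illustrative assumptions, not the paper's actual statistical analysis.

```python
# Illustrative sketch (not the paper's exact method): refine a dense disparity
# map using a label image from any region-based segmentation, assuming each
# region of nearly homogeneous intensity corresponds to one physical surface.
import numpy as np

def refine_disparity(disparity, labels, outlier_sigma=2.0):
    """Replace disparity outliers inside each segmented region with the
    region's median value."""
    refined = disparity.astype(float).copy()
    for region_id in np.unique(labels):
        mask = labels == region_id
        values = refined[mask]
        median = np.median(values)
        # Robust spread estimate: MAD scaled to approximate a standard deviation
        mad = 1.4826 * np.median(np.abs(values - median)) + 1e-6
        outliers = np.abs(values - median) > outlier_sigma * mad
        values[outliers] = median  # suppress likely stereo mismatches
        refined[mask] = values
    return refined

# Toy usage: a 4x4 disparity map with one mismatched pixel inside region 1.
disp = np.array([[5.0, 5, 9, 9],
                 [5, 30, 9, 9],
                 [5, 5, 9, 9],
                 [5, 5, 9, 9]])
segs = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 2],
                 [1, 1, 2, 2],
                 [1, 1, 2, 2]])
print(refine_disparity(disp, segs))  # the spurious 30 is pulled back toward 5
```

As the abstract notes, a scheme of this kind does not depend on how the initial disparity map was produced; only the per-region statistics of the disparities are used.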