{"title":"运动分割:一种协同方法","authors":"C. Fermüller, T. Brodský, Y. Aloimonos","doi":"10.1109/CVPR.1999.784633","DOIUrl":null,"url":null,"abstract":"Since estimation of camera motion requires knowledge of independent motion, and moving object detection and localization requires knowledge about the camera motion, the two problems of motion estimation and segmentation need to be solved together in a synergistic manner. This paper provides an approach to treating both these problems simultaneously. The technique introduced here is based on a novel concept, \"scene ruggedness\" which parameterizes the variation in estimated scene depth with the error in the underlying three-dimensional (3D) motion. The idea is that incorrect 3D motion estimates cause distortions in the estimated depth map, and as a result smooth scene patches are computed as rugged surfaces. The correct 3D motion can be distinguished, as it does not cause any distortion and thus gives rise to the background patches with the least depth variation between depth discontinuities, with the locations corresponding to independent motion being rugged. The algorithm presented employs a binocular observer whose nature is exploited in the extraction of depth discontinuities, a step that facilitates the overall procedure, but the technique can be extended to a monocular observer in a variety of ways.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"30 1","pages":"226-231"},"PeriodicalIF":0.0000,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Motion segmentation: a synergistic approach\",\"authors\":\"C. Fermüller, T. Brodský, Y. Aloimonos\",\"doi\":\"10.1109/CVPR.1999.784633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Since estimation of camera motion requires knowledge of independent motion, and moving object detection and localization requires knowledge about the camera motion, the two problems of motion estimation and segmentation need to be solved together in a synergistic manner. This paper provides an approach to treating both these problems simultaneously. The technique introduced here is based on a novel concept, \\\"scene ruggedness\\\" which parameterizes the variation in estimated scene depth with the error in the underlying three-dimensional (3D) motion. The idea is that incorrect 3D motion estimates cause distortions in the estimated depth map, and as a result smooth scene patches are computed as rugged surfaces. The correct 3D motion can be distinguished, as it does not cause any distortion and thus gives rise to the background patches with the least depth variation between depth discontinuities, with the locations corresponding to independent motion being rugged. The algorithm presented employs a binocular observer whose nature is exploited in the extraction of depth discontinuities, a step that facilitates the overall procedure, but the technique can be extended to a monocular observer in a variety of ways.\",\"PeriodicalId\":20644,\"journal\":{\"name\":\"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. 
No PR00149)\",\"volume\":\"30 1\",\"pages\":\"226-231\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-06-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CVPR.1999.784633\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR.1999.784633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Since estimation of camera motion requires knowledge of independent motion, and moving object detection and localization require knowledge of the camera motion, the two problems of motion estimation and segmentation need to be solved together in a synergistic manner. This paper provides an approach that treats both problems simultaneously. The technique introduced here is based on a novel concept, "scene ruggedness," which parameterizes the variation in estimated scene depth with the error in the underlying three-dimensional (3D) motion. The idea is that incorrect 3D motion estimates cause distortions in the estimated depth map, and as a result smooth scene patches are computed as rugged surfaces. The correct 3D motion can be distinguished because it introduces no such distortion: it yields background patches with the least depth variation between depth discontinuities, while the locations corresponding to independent motion remain rugged. The algorithm presented employs a binocular observer, whose nature is exploited in the extraction of depth discontinuities, a step that facilitates the overall procedure; the technique can, however, be extended to a monocular observer in a variety of ways.
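To make the "scene ruggedness" idea concrete, the following is a minimal numerical sketch, not the authors' algorithm: it assumes a purely translational camera with a unit-focal-length pinhole model, noiseless synthetic flow, and it skips the binocular extraction of depth discontinuities, simply measuring local depth variation in fixed blocks. All function names and the synthetic scene are illustrative assumptions.

```python
# Sketch of ruggedness-based motion selection and segmentation (assumptions:
# translation-only camera, unit focal length, synthetic noiseless flow).
import numpy as np


def estimate_inverse_depth(u, v, t, xs, ys, f=1.0):
    """Least-squares per-pixel inverse depth for a candidate translation t."""
    tx, ty, tz = t
    a = tz * xs - f * tx              # flow direction predicted by t (x part)
    b = tz * ys - f * ty              # (y part)
    return (u * a + v * b) / (a * a + b * b + 1e-9)


def ruggedness(inv_depth, win=5):
    """Mean variance of inverse depth over non-overlapping win x win blocks."""
    h, w = inv_depth.shape
    h2, w2 = (h // win) * win, (w // win) * win
    blocks = inv_depth[:h2, :w2].reshape(h2 // win, win, w2 // win, win)
    return blocks.var(axis=(1, 3)).mean()


# --- tiny synthetic experiment -------------------------------------------
n = 128
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
inv_depth_true = 1.0 / (5.0 + 2.0 * xs)          # smooth slanted plane
t_true = np.array([-0.3, 0.2, 1.0])

# rigid-background flow predicted by the translational model
u = (t_true[2] * xs - t_true[0]) * inv_depth_true
v = (t_true[2] * ys - t_true[1]) * inv_depth_true

# an independently moving patch whose flow violates the rigid model
obj = (xs > 0.3) & (xs < 0.6) & (ys > 0.3) & (ys < 0.6)
u = u + obj * 0.25 * (1.0 + np.sin(40.0 * xs))

candidates = [t_true,
              np.array([-0.5, -0.5, 1.0]),       # incorrect 3D motions
              np.array([0.0, -0.4, 1.0])]
scores = [ruggedness(estimate_inverse_depth(u, v, t, xs, ys)) for t in candidates]
best = int(np.argmin(scores))
print("ruggedness per candidate:", np.round(scores, 5), "-> best:", candidates[best])

# blocks that stay rugged under the best motion indicate independent motion
q = estimate_inverse_depth(u, v, candidates[best], xs, ys)
win, h2 = 5, (n // 5) * 5
block_var = q[:h2, :h2].reshape(h2 // win, win, h2 // win, win).var(axis=(1, 3))
print("rugged blocks flagged as independently moving:", int((block_var > 1e-3).sum()))
```

In this toy setup the correct translation reproduces the smooth slanted plane and scores the lowest ruggedness, while incorrect candidates distort the recovered depth (sign flips and blow-ups near their focus of expansion) and score much higher; the blocks that remain rugged under the winning motion coincide with the independently moving patch. The paper's method additionally uses the binocular observer to locate depth discontinuities, so that depth variation is measured only within smooth patches rather than in arbitrary blocks as done here.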