{"title":"Models of Statistical Visual Motion Estimation","authors":"Spetsakis M.","doi":"10.1006/ciun.1994.1059","DOIUrl":"10.1006/ciun.1994.1059","url":null,"abstract":"<div><p>Several models of statistical estimation of motion from visual input are derived and analyzed theoretically and experimentally. We study a wide variety of models, ones that use least squares and ones that use maximum likelihood, with several different assumptions (dependent and independent noise, isotropic and non-isotropic noise), spherical and planar image surfaces, and different preprocessing (one based on correspondence and one based on disparity). We do all this analysis using only a few fundamental concepts from statistical estimation, so the relative merits and shortcomings of all the methods become evident. The experimental results provide a quantitative measure of these merits.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 3","pages":"Pages 300-312"},"PeriodicalIF":0.0,"publicationDate":"1994-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1059","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86573163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
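As a hedged illustration of the least-squares vs. maximum-likelihood distinction in the abstract above (isotropic vs. non-isotropic noise), the sketch below contrasts ordinary least squares with generalized (whitened) least squares, which coincides with the Gaussian ML estimate when the noise covariance is known. The linear observation model and all names here are illustrative assumptions, not the paper's actual motion equations.

```python
import numpy as np

# Illustrative sketch (not the paper's model): for a linear observation
# model b = A @ x + noise, ordinary least squares is optimal only when the
# noise is isotropic; under Gaussian noise with known covariance C, the
# maximum-likelihood estimate is the generalized (whitened) least-squares
# solution.

def ols(A, b):
    # Ordinary least squares: minimizes ||A x - b||^2.
    return np.linalg.lstsq(A, b, rcond=None)[0]

def gls(A, b, C):
    # Whiten with W such that W^T W = C^{-1}, then apply OLS;
    # this is the Gaussian ML estimate for non-isotropic noise.
    W = np.linalg.cholesky(np.linalg.inv(C)).T
    return ols(W @ A, W @ b)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
x_true = np.array([1.0, -2.0, 0.5])
sigmas = np.linspace(0.1, 2.0, 200)      # strongly non-isotropic noise
C = np.diag(sigmas**2)
b = A @ x_true + rng.normal(size=200) * sigmas

x_ols = ols(A, b)                        # ignores the noise structure
x_gls = gls(A, b, C)                     # accounts for it
```

In the noiseless limit both estimators recover the true parameters; with non-isotropic noise the whitened solution is the one a Gaussian likelihood justifies.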
{"title":"On the Paper by R. M. Haralick","authors":"Cinque L., Guerra C., Levialdi S.","doi":"10.1006/ciun.1994.1051","DOIUrl":"10.1006/ciun.1994.1051","url":null,"abstract":"","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 2","pages":"Pages 250-252"},"PeriodicalIF":0.0,"publicationDate":"1994-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84063292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Region-Based Tracking Using Affine Motion Models in Long Image Sequences","authors":"Meyer F.G., Bouthemy P.","doi":"10.1006/ciun.1994.1042","DOIUrl":"10.1006/ciun.1994.1042","url":null,"abstract":"<div><p>This work investigates a new approach to the tracking of regions in an image sequence. The approach relies on two successive operations: detection and discrimination of moving targets and then pursuit of the targets. A motion-based segmentation algorithm, previously developed in the laboratory, provides the detection and discrimination stage. This paper emphasizes the pursuit stage. A pursuit algorithm has been designed that directly tracks the region representing the projection of a moving object in the image, rather than relying on the set of trajectories of individual points or segments. The region tracking is based on the dense estimation of an affine model of the motion field within each region, which makes it possible to predict the position of the target in the next frame. A multiresolution scheme provides reliable estimates of the motion parameters, even in the case of large displacements. Two interacting linear dynamic systems describe the temporal evolution of the geometry and the motion of the tracked regions. Experiments conducted on real images demonstrate that the approach is robust against occlusion and can handle large interframe displacements and complex motions.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 2","pages":"Pages 119-140"},"PeriodicalIF":0.0,"publicationDate":"1994-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1042","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74594958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
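The affine motion model mentioned in the abstract above can be made concrete with a small sketch: a 6-parameter affine flow fitted by least squares to the displacements observed inside a region, then used to predict the region's position in the next frame. Function names are hypothetical; the paper's dense multiresolution estimator and dynamic systems are not reproduced here.

```python
import numpy as np

def fit_affine_motion(pts, flow):
    """Least-squares fit of the 6-parameter affine motion model
         u = a1 + a2*x + a3*y,   v = a4 + a5*x + a6*y
    to displacements `flow` (N x 2) observed at points `pts` (N x 2)."""
    B = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    au = np.linalg.lstsq(B, flow[:, 0], rcond=None)[0]  # u-component params
    av = np.linalg.lstsq(B, flow[:, 1], rcond=None)[0]  # v-component params
    return au, av

def predict_region(pts, au, av):
    """Displace the region's points with the fitted model, i.e. predict
    the target's position in the next frame."""
    B = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    return pts + np.column_stack([B @ au, B @ av])

# Synthetic check: points moving under a known affine field are recovered.
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
pts = np.column_stack([xs.ravel(), ys.ravel()])
au_true, av_true = np.array([1.0, 0.1, 0.0]), np.array([-2.0, 0.0, 0.05])
B = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
flow = np.column_stack([B @ au_true, B @ av_true])
au, av = fit_affine_motion(pts, flow)
```

The u- and v-components decouple into two 3-parameter linear problems, which is why a plain least-squares solve per component suffices in this sketch.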
{"title":"Performance Analysis of 1-D Scale-Space Algorithms for Pulse Detection in Noisy Image Scans","authors":"Topkar V., Sood A.K., Kjell B.","doi":"10.1006/ciun.1994.1047","DOIUrl":"10.1006/ciun.1994.1047","url":null,"abstract":"<div><p>Scale-space representation is a topic of active research in computer vision. The focus of the research so far has been on coarse-to-fine focusing methods, image reconstruction, and computational aspects. However, not much work has been done on the signal detection problem, i.e., detecting the presence or absence of signal models from noisy image scans using the scale-space. In this paper we propose four 1-D signal detection algorithms for separating pulse signals in an image scan from the background in the scale-space domain. These algorithms do not need any thresholding to detect the zero-crossings (zc's) at any of the scales. The different algorithms are applicable to image scans with different noise and clutter characteristics. A simple algorithm works best for scans having low noise and clutter. When noise and clutter increase sufficiently, a more sophisticated algorithm must be used. The 1-D algorithms for pulse and edge detection can be used to detect 2-D closed objects in cluttered and noisy backgrounds. This is done by scanning the image row-wise (and column-wise) and working on the individual scans. Using this method, the algorithms are demonstrated on several real life images. Another objective of this paper is to conduct comparative analysis of (i) a single-scale system vs a multiscale system and (ii) white noise vs clutter. This is done by conducting an experimental statistical analysis on single-scale and multiscale systems corrupted by white noise or clutter. Performance indices such as probability of detection, probability of false alarms, and delocalization errors are computed. The results indicate that (i) the multiscale approach is better than the single-scale approach and (ii) the degradation in performance is greater with clutter than with white noise.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 2","pages":"Pages 191-209"},"PeriodicalIF":0.0,"publicationDate":"1994-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1047","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89755272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
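The scale-space zero-crossing idea in the abstract above can be sketched generically: smooth a 1-D scan with the second derivative of a Gaussian at several scales and locate sign changes of the response, which bracket a pulse's two edges without any threshold. This is a minimal textbook-style illustration, not one of the paper's four algorithms.

```python
import numpy as np

def gaussian_2nd_deriv_kernel(sigma):
    # Second derivative of a unit-area Gaussian, sampled on +/- 4 sigma.
    r = int(np.ceil(4 * sigma))
    t = np.arange(-r, r + 1, dtype=float)
    g = np.exp(-t**2 / (2 * sigma**2))
    g /= g.sum()
    return (t**2 / sigma**4 - 1.0 / sigma**2) * g

def zero_crossings(y):
    # Indices i where y changes sign between i and i+1 (no threshold used).
    s = np.sign(y)
    return np.nonzero(s[:-1] * s[1:] < 0)[0]

def detect_pulse(scan, sigmas=(1.0, 2.0, 4.0)):
    """Zero crossings of the Gaussian-second-derivative response per scale;
    a pulse's two edges appear as a zc pair that persists across scales."""
    return {s: zero_crossings(
                np.convolve(scan, gaussian_2nd_deriv_kernel(s), mode='same'))
            for s in sigmas}

scan = np.zeros(120)
scan[40:60] = 1.0            # synthetic pulse on a flat background
zcs_by_scale = detect_pulse(scan)
```

For the synthetic pulse, the mid-scale response has zero crossings at the two edge locations (near samples 39 and 59); tracking which crossings survive at coarser scales is what distinguishes signal from noise in a scale-space detector.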
{"title":"Computer Vision: The Goal and the Means","authors":"Meer P.","doi":"10.1006/ciun.1994.1053","DOIUrl":"10.1006/ciun.1994.1053","url":null,"abstract":"","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 2","pages":"Pages 257-259"},"PeriodicalIF":0.0,"publicationDate":"1994-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1053","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82666363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Incremental Approximation of Nonrigid Motion","authors":"Penna M.A.","doi":"10.1006/ciun.1994.1043","DOIUrl":"10.1006/ciun.1994.1043","url":null,"abstract":"<div><p>In this paper we present an approach to the nonrigid shape-from-motion problem for surfaces in 3-space that involves incremental approximations. Specifically, assuming we know the shape of a surface before a nonrigid motion, we show how we can use monocular perspective images of the surface taken before and after the motion to obtain arbitrarily good approximations to both shape and motion parameters. We also present results obtained by implementing our method on images of real nonrigid motions.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"60 2","pages":"Pages 141-156"},"PeriodicalIF":0.0,"publicationDate":"1994-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1994.1043","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73185277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}