Multiple object tracking system with three level continuous processes
K. Fukui, H. Nakai, Y. Kuno (doi: 10.1109/ACV.1992.240331)
Reports a system for detecting human-like moving objects in time-varying images. The authors show how it is possible to detect the image trajectories of people moving in ordinary indoor scenes. The system consists of three subprocesses: changing region detection, moving object tracking, and movement interpretation. The processes are executed in parallel so that each one can recover from the others' errors. This ensures reliable detection of trajectories in difficult cases such as movement across complicated backgrounds. The authors have built a trial detection system using a parallel image processing system. The details of the trial system and experimental results of walking-person detection are described.
The RegiStar Machine: from conception to installation
G. Medioni, A. Huertas, Monti R. Wilson (doi: 10.1109/ACV.1992.240307)
The authors have developed a machine to perform the task of automatic registration of color separation films, a process manually performed by skilled professionals in the graphic arts printing industry. The development of such a machine requires overcoming significant challenges: designing a sound computer vision methodology while respecting hard timing constraints, transferring software across platforms and languages, validating the software, building the actual machine around the algorithms, testing conformity to tolerances, educating operators on the use of such a machine, and making the system robust enough to operate around the clock with no technical supervision. The authors present a brief overview of the problem, followed by the answers they provided to these challenges.
Interactive road finding for aerial images
Jianying Hu, Bill Sakoda, T. Pavlidis (doi: 10.1109/ACV.1992.240327)
Fully automatic road recognition remains an elusive goal despite many years of research. Most practical systems today use tedious manual tracing to enter data from satellite and aerial images into geographical databases. The paper presents a semi-automatic method for the entry of such data. First, ribbons of high contrast are found by analyzing the principal curvatures of the gray-scale surface. Pixels belonging to such ribbons are then fitted by conic splines, and a graph is constructed whose nodes are the end points of the fitted arcs. The key new idea is to assign edges between all nodes and label them with a cost function based on physical constraints on roads. Once a pair of end points is chosen, a shortest-path algorithm determines the road between them, so a global optimization is performed over all possible candidates.
Restoration of scanning probe microscope images
G. Pingali, R. Jain (doi: 10.1109/ACV.1992.240301)
Scanning probe microscopy (SXM), which includes techniques such as scanning tunneling microscopy (STM) and scanning force microscopy (SFM), is becoming popular for 3D metrology in the semiconductor industry and for high-resolution 3D imaging of surfaces in materials science and biology. The authors present imaging models for SXM that take into account the effect of probe geometry on topographic images produced by SXM in 'contact' and 'non-contact' modes. The authors formulate methods for restoring an SXM image to obtain the original surface. Criteria for determining the certainty of restoration are developed. It is shown that the methods developed can be expressed in terms of gray-scale morphological operators. The efficacy of the approach is demonstrated by applying it to synthetic and real data.
Recovering building structures from stereo
Ronald Chung, R. Nevatia (doi: 10.1109/ACV.1992.240326)
Addresses the problem of extracting polyhedral building structures from a stereo pair of aerial intensity images. The authors describe a system that computes a hierarchy of descriptions such as segments, junctions, and links between junctions from each view, and matches these features at the different levels. Such high-level features not only help reduce correspondence ambiguity during stereo matching, but also allow the inference of surface boundaries even when those boundaries are broken by noise and weak contrast. The authors hypothesize surface boundaries by examining global information such as continuity and coplanarity of linked edges in 3-D, rather than merely by looking at local depth information. When the walls of the buildings are visible, they also exploit the relationship among adjacent surfaces in a polyhedral object to help confirm the different levels of descriptions. The authors give experimental results for aerial images taken from overhead and oblique views.
Point target detection in spatially varying clutter
S. Sridhar, G. Healey (doi: 10.1109/ACV.1992.240306)
The authors develop and analyze high-speed algorithms for the detection of point targets in infrared (IR) images with spatially varying clutter. Current target detection systems are effective in detecting bright targets in a uniform sky, but in areas of strong clutter are either unable to detect targets reliably or are limited by high false alarm rates. The authors assume that target and sensor models are available. Clutter is considered to be poorly characterized and spatially varying. Target detection algorithms are based on filtering to enhance the target signal relative to the background, followed by an adaptive threshold. Statistical analysis of the algorithms is provided to quantify algorithm performance. The system implements a spatially adaptive algorithm that maximizes probability of target detection while maintaining a fixed false alarm rate. The algorithms are robust in the presence of spatially varying clutter. The authors include experimental results to illustrate this.
A visually guided mobile robot acting in indoor environments
M. Fossa, E. Grosso, F. Ferrari, M. Magrassi, G. Sandini, M. Zapendouski (doi: 10.1109/ACV.1992.240298)
The paper describes the practical implementation of a vision-based navigation system for a mobile robot operating in indoor environments. The robot acquires visual information by means of three CCD cameras mounted on board. A stereo pair is used for ground plane obstacle detection and avoidance, while the third camera is used to locate landmarks and compute the robot's position. Odometric readings are used to guide visual perception by simple 'where to look next' strategies. The whole processing and the control architecture determining the overall behaviour of the robot are mainly implemented on a parallel MIMD machine. Some examples are presented, showing how the robot moves in a partially structured environment reaching the specified goal points with a fair degree of accuracy, avoiding unpredicted obstacles and following trajectories obtained through the cooperation of the various navigation modules running in parallel.
Shape recovery methods for visual inspection
S. Nayar (doi: 10.1109/ACV.1992.240318)
The advancement of three-dimensional machine vision is closely related to the development of robust and efficient shape recovery methods. The author addresses the recovery problem associated with three different classes of surfaces: (a) specular surfaces; (b) surfaces with varying reflectance; and (c) rough and textured surfaces. Three real-time machine vision systems have been developed based on these results. Experimental results demonstrate that the proposed methods and systems are applicable to a variety of visual inspection problems.
A vision system for inspection of ball bonds in integrated circuits
A. Khotanzad, H. Banerjee, M. Srinath (doi: 10.1109/ACV.1992.240300)
The paper describes a vision system for automatic inspection of the connecting part of the wire bond of an IC, where the wire connects to the bond pad on the chip. It considers a popular type of such bonds known as the 'ball bond'. Using two-dimensional images taken from the top of the IC wafer, the system determines several geometric measures which are important in determining the quality of the bond. These measures include the boundary, the lengths of the major and minor axes of the best-fitting ellipse, and the center. The process utilizes automatic thresholding, morphological operations, and geometric moments of the image. Success of the method is demonstrated through experimental studies on actual bonds.
Fiber identification in microscopy by ridge detection and grouping
F. Glazer (doi: 10.1109/ACV.1992.240310)
In microscopy, a common task is the identification of individual objects having some particular shape, after which various features can be measured and feature statistics taken over the set of objects. The identification process can be automated by applying appropriate computer vision techniques. The author addresses the specific problem of fiber identification. Fibers appear as thin lines or curves in an image. In a 3D graph or 'surface plot' of the image, they would appear as ridges or valleys. The paper describes a method of finding fibers based on the detection of individual ridge 'edgels'; grouping of these edgels into simple, generally non-overlapping, curves; and the further grouping of curves into extended fibers.<>