MDL-based Genetic Programming for Object Detection
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10062
Yingqiang Lin, B. Bhanu
In this paper, genetic programming (GP) is applied to synthesize composite operators from primitive operators and primitive features for object detection. To improve the efficiency of GP, smart crossover, smart mutation, and a public library are proposed to identify and keep the effective components of composite operators. To prevent code bloat without severely restricting the GP search, an MDL-based fitness function is designed that incorporates the size of a composite operator into the fitness evaluation process. Experiments with real synthetic aperture radar (SAR) images show that, compared to normal GP, the GP algorithm proposed here finds effective composite operators more quickly.
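The abstract does not give the exact fitness formula, but the MDL idea maps naturally to a two-term cost: bits to encode the residual detection errors plus bits to encode the operator tree itself. A minimal sketch, assuming a constant bit-cost per tree node and a log-scaled error term (both weightings are illustrative assumptions, not the paper's formulation):

```python
import math

def mdl_fitness(detection_error, tree_size, n_training_pixels, lam=1.0):
    """MDL-style fitness: cost of encoding the data given the model
    (each misclassified pixel costs ~log2(N) bits to locate) plus the
    cost of encoding the model itself (composite-operator size).
    Lower is better."""
    data_cost = detection_error * math.log2(n_training_pixels)
    model_cost = lam * tree_size  # assumed constant bits per tree node
    return data_cost + model_cost

# A small operator tree that fits the data as well as a large one wins:
print(mdl_fitness(detection_error=12, tree_size=15, n_training_pixels=4096))
print(mdl_fitness(detection_error=12, tree_size=90, n_training_pixels=4096))
```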
{"title":"MDL-based Genetic Programming for Object Detection","authors":"Yingqiang Lin, B. Bhanu","doi":"10.1109/CVPRW.2003.10062","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10062","url":null,"abstract":"In this paper, genetic programming (GP) is applied to synthesize composite operators from primitive operators and primitive features for object detection. To improve the efficiency of GP, smart crossover, smart mutation and a public library are proposed to identify and keep the effective components of composite operators. To prevent code bloat and avoid severe restriction on the GP search, a MDL-based fitness function is designed to incorporate the size of composite operator into the fitness evaluation process. The experiments with real synthetic aperture radar (SAR) images show that compared to normal GP, GP algorithm proposed here finds effective composite operators more quickly.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131088682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-dependent HMMs for visual intrusion detection
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10035
Vera M. Kettnaker
We propose a new Hidden Markov Model with time-dependent states. Estimation of this model is shown to be as fast and easy as the estimation of regular HMMs. We demonstrate the usefulness and feasibility of such time-dependent HMMs with an application in which illegitimate access to personnel-only rooms (in airports, for example) can be distinguished from access by legitimate personnel, based on differences in the time of access or in the motion trajectories.
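A sketch of why estimation stays as easy as for a regular HMM: when the time bin of each observation is known, the standard scaled forward recursion applies unchanged with the emission matrix indexed by that bin. The two-state, two-bin setup and all probabilities below are invented for illustration, not the paper's parameters:

```python
import numpy as np

def forward_loglik(obs, hour_bin, pi, A, B_by_hour):
    """Scaled forward algorithm for an HMM whose emission matrix depends
    on the (known) time bin of each observation. obs[t] is a discrete
    symbol; hour_bin[t] selects B_by_hour[hour_bin[t]]."""
    alpha = pi * B_by_hour[hour_bin[0]][:, obs[0]]
    c = alpha.sum()
    alpha /= c
    loglik = np.log(c)
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B_by_hour[hour_bin[t]][:, obs[t]]
        c = alpha.sum()
        alpha /= c
        loglik += np.log(c)
    return loglik

# 2 hidden states (legitimate / illegitimate), 3 observation symbols,
# 2 time bins (day / night); all numbers are made up for illustration.
pi = np.array([0.9, 0.1])
A = np.array([[0.95, 0.05], [0.10, 0.90]])
B_by_hour = {
    0: np.array([[0.7, 0.2, 0.1], [0.2, 0.3, 0.5]]),  # daytime emissions
    1: np.array([[0.3, 0.4, 0.3], [0.1, 0.2, 0.7]]),  # nighttime emissions
}
print(forward_loglik([0, 1, 2], [0, 0, 1], pi, A, B_by_hour))
```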
{"title":"Time-dependent HMMs for visual intrusion detection","authors":"Vera M. Kettnaker","doi":"10.1109/CVPRW.2003.10035","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10035","url":null,"abstract":"We propose a new Hidden Markov Model with time-dependent states. Estimation of this model is shown to be as fast and easy as the estimation of regular HMMs. We demonstrate the usefulness and feasibility of such time-dependent HMMs with an application in which illegitimate access to personnel-only rooms in airports etc. can be distinguished from access by legitimate personnel, based on differences in the time of access or differences in the motion trajectories.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115404108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Omnidirectional Distributed Vision System for a Team of Heterogeneous Robots
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10068
E. Menegatti, A. Scarpa, Dario Massarin, Enrico Ros, E. Pagello
This paper presents a system designed to cooperatively track moving objects and share the resulting information across a multi-robot team. Every robot in the team is fitted with a different omnidirectional vision system running at a different frame rate. The information gathered by every robot is broadcast to all the other robots, and every robot fuses its own measurements with the information received from its teammates, building its own "vision of the world". The cooperation of the vision sensors enhances the capabilities of each single vision sensor. This work was implemented in the RoboCup domain, using our team of heterogeneous robots, but the approach is general and can be used in any application where a team of robots has to track multiple objects. The system is designed to work with vision systems that are heterogeneous both in camera design and in computational resources. Experiments in real game scenarios are presented.
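The abstract does not specify the fusion rule; a common choice for combining teammates' estimates of the same object is covariance-weighted (information-filter style) averaging, sketched here as an assumption rather than the paper's actual method:

```python
import numpy as np

def fuse_estimates(estimates):
    """Covariance-weighted fusion of independent 2D position estimates of
    the same object, one per robot: x = (sum C_i^-1)^-1 * sum C_i^-1 x_i."""
    info = np.zeros((2, 2))
    info_vec = np.zeros(2)
    for x, C in estimates:
        Cinv = np.linalg.inv(C)
        info += Cinv
        info_vec += Cinv @ x
    C_fused = np.linalg.inv(info)
    return C_fused @ info_vec, C_fused

# Two robots see the same ball with different uncertainty:
pos, cov = fuse_estimates([
    (np.array([2.0, 1.0]), np.diag([0.1, 0.1])),  # close robot, confident
    (np.array([2.4, 0.8]), np.diag([0.9, 0.9])),  # far robot, less confident
])
print(pos)  # fused position, pulled toward the confident robot's estimate
```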
{"title":"Omnidirectional Distributed Vision System for a Team of Heterogeneous Robots","authors":"E. Menegatti, A. Scarpa, Dario Massarin, Enrico Ros, E. Pagello","doi":"10.1109/CVPRW.2003.10068","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10068","url":null,"abstract":"This paper presents a system designed to cooperatively track and share the information about moving objects using a multi-robot team. Every robot of the team is fitted with a different omnidirectional vision system running at different frame rates. The information gathered from every robot is broadcast to all the other robots and every robot fuses its own measurements with the information received from the teammates, building its own \"vision of the world\". The cooperation of the vision sensors enhances the capabilities of the single vision sensor. This work was implemented in the RoboCup domain, using our team of heterogeneous robot, but the approach is very general and can be used in any application where a team of robot has to track multiple objects. The system is designed to work with heterogeneous vision systems both in the camera design and in computational resources. Experiments in real game scenarios are presented.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116074727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-Overlap Range Image Registration for Archaeological Applications
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10011
Luciano Silva, O. Bellon, K. Boyer, P. Gotardo
In digital archaeology, the 3D modeling of physical objects from range views is an important problem. Applications generally demand a great number of views to create a precise 3D model through a registration process. Most range image registration techniques are based on variants of the ICP (Iterative Closest Point) algorithm. The ICP algorithm has two main drawbacks: the possibility of convergence to a local minimum, and the need to prealign the images. Genetic algorithms (GAs) have recently been applied to range image registration, providing good convergence results without the constraints observed in ICP approaches. To improve range image registration, we explore the use of GAs and develop a novel approach that combines a GA with hill-climbing heuristics (GH). The experimental results show that our method is effective in aligning low-overlap views and yields more accurate registration results than either ICP or standard GA approaches. Our method is highly advantageous in archaeological applications, where it is necessary both to reduce the number of views to be aligned, because data acquisition is expensive, and to minimize error accumulation in the 3D model. We also present a new measure of surface interpenetration with which to evaluate the registration, and demonstrate its utility with experimental results.
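A toy 2D sketch of the GA-plus-hill-climbing (GH) idea: a population of rigid transforms evolves under a closest-point-distance fitness, and the elite individuals are refined by greedy local search. The 2D setting, parameter values, and operators are simplifications of the paper's 3D method:

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(pts, p):
    tx, ty, th = p
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + [tx, ty]

def fitness(p, src, dst):
    """Mean closest-point distance from transformed src to dst (lower is better)."""
    moved = transform(src, p)
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    return d.min(axis=1).mean()

def hill_climb(p, src, dst, step=0.05, iters=20):
    """Greedy local refinement of one individual (the hill-climbing part of GH)."""
    best, best_f = p, fitness(p, src, dst)
    for _ in range(iters):
        cand = best + rng.normal(0, step, 3)
        f = fitness(cand, src, dst)
        if f < best_f:
            best, best_f = cand, f
    return best

def ga_register(src, dst, pop_size=30, gens=40):
    lo, hi = [-1, -1, -np.pi], [1, 1, np.pi]
    pop = rng.uniform(lo, hi, (pop_size, 3))
    for _ in range(gens):
        pop = np.array(sorted(pop, key=lambda p: fitness(p, src, dst)))
        elite = [hill_climb(p, src, dst) for p in pop[:5]]    # refine the elite
        children = [(a + b) / 2 + rng.normal(0, 0.05, 3)       # crossover + mutation
                    for a, b in zip(pop[:15], pop[1:16])]
        pop = np.vstack([elite, children, rng.uniform(lo, hi, (pop_size - 20, 3))])
    return min(pop, key=lambda p: fitness(p, src, dst))

# Toy test: recover a known 2D rigid motion.
src = rng.uniform(0, 1, (40, 2))
true_p = np.array([0.3, -0.2, 0.4])
dst = transform(src, true_p)
print(ga_register(src, dst), "vs true", true_p)
```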
{"title":"Low-Overlap Range Image Registration for Archaeological Applications","authors":"Luciano Silva, O. Bellon, K. Boyer, P. Gotardo","doi":"10.1109/CVPRW.2003.10011","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10011","url":null,"abstract":"In digital Archaeology, the 3D modeling of physical objects from range views is an important issue. Generally, the applications demand a great number of views to create a precise 3D model through a registration process. Most range image registration techniques are based on variants of the ICP (Iterative Closest Point) algorithm. The ICP algorithm has two main drawbacks: the possibility of convergence to a local minimum, and the need to prealign the images. Genetic Algorithms (GAs) were recently applied to range image registration providing good convergence results without the constraints observed in the ICP approaches. To improve range image registration, we explore the use of GAs and develop a novel approach that combines a GA with hillclimbing heuristics (GH). The experimental results show that our method is effective in aligning low overlap views and yield more accurate registration results than either ICP or standard GA approaches. Our method is highly advantageous in archaeological applications, where it is necessary to reduce the number of views to be aligned because data acquisition is expensive and also to minimize error accumulation in the 3D model. We also present a new measure of surface interpenetration with which to evaluate the registration and prove its utility with experimental results.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122807873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Corner Feature Correspondences to Rank Word Images by Similarity
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10021
Jamie L. Rothfeder, Shaolei Feng, T. Rath
Libraries contain enormous numbers of handwritten historical documents that cannot be made available online because they lack a searchable index. Word spotting has previously been proposed as a way to create indexes for such documents and collections by matching word images. In this paper we present an algorithm which compares whole word images based on their appearance. The algorithm recovers correspondences between points of interest in two images, and then uses these correspondences to construct a similarity measure. This similarity measure can then be used to rank word images in order of their closeness to a query image. We achieved an average precision of 62.57% on a set of 2372 images of reasonable quality, and an average precision of 15.49% on a set of 3262 images from documents of such poor quality that they are hard even for humans to read.
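A minimal sketch of the ranking stage, assuming corner locations have already been extracted from each word image; the Gaussian-kernel score and nearest-neighbour correspondence below are stand-ins for the paper's corner matching:

```python
import numpy as np

def similarity(query_corners, word_corners, sigma=5.0):
    """Toy appearance similarity: for each query corner, find the nearest
    corner in the candidate word image and score the residual distance
    under a Gaussian kernel (a stand-in for the paper's correspondence
    recovery)."""
    d = np.linalg.norm(query_corners[:, None] - word_corners[None, :], axis=2)
    return np.exp(-d.min(axis=1) ** 2 / (2 * sigma ** 2)).mean()

def rank_words(query_corners, collection):
    """Return word-image ids sorted by decreasing similarity to the query."""
    scores = {wid: similarity(query_corners, c) for wid, c in collection.items()}
    return sorted(scores, key=scores.get, reverse=True)

rng = np.random.default_rng(3)
query = rng.uniform(0, 100, (12, 2))
collection = {"word_a": query + rng.normal(0, 1, query.shape),  # near-duplicate
              "word_b": rng.uniform(0, 100, (15, 2))}           # unrelated word
print(rank_words(query, collection))  # word_a should rank first
```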
{"title":"Using Corner Feature Correspondences to Rank Word Images by Similarity","authors":"Jamie L. Rothfeder, Shaolei Feng, T. Rath","doi":"10.1109/CVPRW.2003.10021","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10021","url":null,"abstract":"Libraries contain enormous amounts of handwritten historical documents which cannot be made available on-line because they do not have a searchable index. The wordspotting idea has previously been proposed as a solution to creating indexes for such documents and collections by matching word images. In this paper we present an algorithm which compares whole word-images based on their appearance. This algorithm recovers correspondences of points of interest in two images, and then uses these correspondences to construct a similarity measure. This similarity measure can then be used to rank word-images in order of their closeness to a querying image. We achieved an average precision of 62.57% on a set of 2372 images of reasonable quality and an average precision of 15.49% on a set of 3262 images from documents of poor quality that are even hard to read for humans.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"369 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122774397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Study on Bayes Feature Fusion for Image Classification
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10090
Xiaojin Shi, R. Manduchi
We consider the problem of image classification when more than one visual feature is available. In such cases, Bayes fusion offers an attractive solution by combining the results of different classifiers (one classifier per feature); this is a general form of the so-called "naive Bayes" approach. Analyzing the performance of Bayes fusion with respect to a Bayesian classifier over the joint feature distribution, however, is tricky. On the one hand, it is well known that the latter has lower bias than the former, unless the features are conditionally independent, in which case the two coincide. On the other hand, as noted by Friedman, the low variance associated with naive Bayes estimation may dramatically mitigate the effect of its bias. In this paper, we assess the tradeoff between these two factors by means of experimental tests on two image data sets using color and texture features. Our results suggest that (1) the difference between the correct classification rates using Bayes fusion and using the joint feature distribution is a function of the conditional dependence of the features (measured in terms of mutual information); however, (2) for small training data sizes, Bayes fusion performs almost as well as the classifier on the joint distribution.
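The comparison at the heart of the paper can be written in a few lines. A sketch with made-up likelihoods for a single observation; note that when the joint likelihood factors into the per-feature product, the two posteriors coincide, as the abstract states:

```python
import numpy as np

def bayes_fusion_posterior(priors, p_color_given_c, p_texture_given_c):
    """Naive-Bayes fusion: treat the two features as conditionally
    independent given the class and multiply per-feature likelihoods."""
    post = priors * p_color_given_c * p_texture_given_c
    return post / post.sum()

def joint_posterior(priors, p_joint_given_c):
    """Bayesian classifier over the joint (color, texture) distribution."""
    post = priors * p_joint_given_c
    return post / post.sum()

# Two classes; made-up likelihoods for one observed (color, texture) pair.
priors = np.array([0.5, 0.5])
print(bayes_fusion_posterior(priors, np.array([0.6, 0.2]), np.array([0.5, 0.3])))
# If the features are conditionally independent, the joint likelihood
# factors into the product above and the two posteriors coincide:
print(joint_posterior(priors, np.array([0.6 * 0.5, 0.2 * 0.3])))
```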
{"title":"A Study on Bayes Feature Fusion for Image Classification","authors":"Xiaojin Shi, R. Manduchi","doi":"10.1109/CVPRW.2003.10090","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10090","url":null,"abstract":"We consider here the problem of image classification when more than one visual feature are available. In these cases, Bayes fusion offers an attractive solution by combining the results of different classifiers (one classifier per feature). This is a general form of the so-called \"naive Bayes\" approach. Analyzing the performance of Bayes fusion with respect to a Bayesian classifier over the joint feature distribution, however, is tricky. On the one hand, it is well-known that the latter has lower bias than the former, unless the features are conditionally independent, in which case the two coincide. On the other hand, as noted by Friedman, the low variance associated with naive Bayes estimation may dramatically mitigate the effect of its bias. In this paper, we attempt to assess the tradeoff between these two factors by means of experimental tests on two image data sets using color and texture features. Our results suggest that (1) the difference between the correct classification rates using Bayes fusion and using the joint feature distribution is a function of the conditional dependence of the features (measured in terms of mutual information), however: (2) for small training data size, Bayes fusion performs almost as well as the classifier on the joint distribution.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116943711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric Subpixel Matchpoint Recovery with Uncertainty Estimation: A Statistical Approach
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10091
R. M. Steele, C. Jaynes
We present a novel matchpoint acquisition method capable of producing accurate correspondences at subpixel precision. Given a known representation of the point to be matched, such as a projected fiducial in a structured light system, the method estimates the fiducial location and its expected uncertainty. Improved matchpoint precision has application in a number of calibration tasks, and the uncertainty estimates can be used to significantly improve overall calibration results. A simple parametric model captures the relationship between the known fiducial and its corresponding position, shape, and intensity on the image plane. For each matchpoint pair, the unknown model parameters are recovered using maximum likelihood estimation to determine a subpixel center for the fiducial. The uncertainty of the matchpoint center is estimated by performing forward error analysis on the expected image noise. Uncertainty estimates used in conjunction with the accurate matchpoints can improve calibration accuracy for multi-view systems.
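A sketch of the estimation pipeline, assuming a Gaussian-spot fiducial model and i.i.d. Gaussian image noise (so maximum likelihood reduces to nonlinear least squares); the paper's parametric model is not necessarily this one. The covariance comes from first-order error analysis on the fitted model:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_spot(params, X, Y):
    x0, y0, sigma, amp, bg = params
    return bg + amp * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

def fit_fiducial(patch):
    """ML fit of a Gaussian spot under i.i.d. Gaussian image noise
    (equivalent to nonlinear least squares). Returns the subpixel center
    and its 2x2 covariance from forward error analysis:
    cov ~ noise_var * (J^T J)^{-1}."""
    h, w = patch.shape
    Y, X = np.mgrid[0:h, 0:w]
    p0 = [w / 2, h / 2, 2.0, patch.max() - patch.min(), patch.min()]
    res = least_squares(lambda p: (gaussian_spot(p, X, Y) - patch).ravel(), p0)
    J = res.jac
    noise_var = 2 * res.cost / (patch.size - len(p0))  # residual variance
    cov = noise_var * np.linalg.inv(J.T @ J)
    return res.x[:2], cov[:2, :2]

# Synthesize a noisy spot with a known off-grid center and recover it.
rng = np.random.default_rng(1)
Y, X = np.mgrid[0:15, 0:15]
truth = [7.3, 6.8, 2.0, 1.0, 0.1]
patch = gaussian_spot(truth, X, Y) + rng.normal(0, 0.02, X.shape)
center, cov = fit_fiducial(patch)
print(center, np.sqrt(np.diag(cov)))  # subpixel center and its std. dev.
```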
{"title":"Parametric Subpixel Matchpoint Recovery with Uncertainty Estimation: A Statistical Approach","authors":"R. M. Steele, C. Jaynes","doi":"10.1109/CVPRW.2003.10091","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10091","url":null,"abstract":"We present a novel matchpoint acquisition method capable of producing accurate correspondences at subpixel precision. Given the known representation of the point to be matched, such as a projected fiducial in a structured light system, the method estimates the fiducial location and its expected uncertainty. Improved matchpoint precision has application in a number of calibration tasks, and uncertainty estimates can be used to significantly improve overall calibration results. A simple parametric model captures the relationship between the known fiducial and its corresponding position, shape, and intensity on the image plane. For each match-point pair, these unknown model parameters are recovered using maximum likelihood estimation to determine a sub-pixel center for the fiducial. The uncertainty of the match-point center is estimated by performing forward error analysis on the expected image noise. Uncertainty estimates used in conjunction with the accurate matchpoints can improve calibration accuracy for multi-view systems.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128141634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Error Propagation in 3D Modeling From Monocular Video
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10092
A. Roy-Chowdhury, R. Chellappa
A significant portion of recent research in computer vision has focused on issues related to the sensitivity and robustness of existing techniques. In this paper, we study the classical structure from motion problem and analyze how the statistics representing the quality of the input video propagate through the reconstruction algorithm and affect the quality of the output reconstruction. Specifically, we show that it is possible to derive analytical expressions for the first and second order statistics (bias and error covariance) of the solution as a function of the statistics of the input. We concentrate on the case of reconstruction from a monocular video, where the small baseline makes any algorithm very susceptible to noise in the motion estimates from the video sequence. We derive an expression relating the error covariance of the reconstruction to the error covariance of the feature tracks in the input video. This is done using the implicit function theorem of real analysis and does not require strong statistical assumptions. Next, we prove that the 3D reconstruction is statistically biased, derive an expression for the bias, and show that it is numerically significant. Combining these two results, we also establish a new bound on the minimum error in the depth reconstruction. We present the numerical significance of these analytical results on real video data.
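The covariance half of the analysis follows the standard first-order propagation rule: if y = f(x), then cov_y is approximately J cov_x Jᵀ with J the Jacobian of f. The paper derives J analytically via the implicit function theorem; the sketch below approximates it by finite differences on a toy depth-from-disparity map, purely for illustration:

```python
import numpy as np

def propagate_covariance(f, x, cov_x, eps=1e-6):
    """First-order error propagation: cov_y ~ J cov_x J^T, with the
    Jacobian J of f at x approximated by forward finite differences."""
    y0 = f(x)
    J = np.zeros((len(y0), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - y0) / eps
    return J @ cov_x @ J.T

# Toy map: depth from two feature positions (inverse disparity),
# with baseline * focal length normalized to 1.
def depth(tracks):
    return np.array([1.0 / (tracks[0] - tracks[1])])

tracks = np.array([1.0, 0.8])
cov_tracks = np.diag([1e-4, 1e-4])  # feature-track noise covariance
print(propagate_covariance(depth, tracks, cov_tracks))  # depth error covariance
```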
{"title":"Statistical Error Propagation in 3D Modeling From Monocular Video","authors":"A. Roy-Chowdhury, R. Chellappa","doi":"10.1109/CVPRW.2003.10092","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10092","url":null,"abstract":"A significant portion of recent research in computer vision has focused on issues related to sensitivity and robustness of existing techniques. In this paper, we study the classical structure from motion problem and analyze how the statistics representing the quality of the input video propagates through the reconstruction algorithm and affects the quality of the output reconstruction. Specifically, we show that it is possible to derive analytical expressions of the first and second order statistics (bias and error covariance) of the solution as a function of the statistics of the input. We concentrate on the case of reconstruction from a monocular video, where the small baseline makes any algorithm very susceptible to noise in the motion estimates from the video sequence. We derive an expression relating the error covariance of the reconstruction to the error covariance of the feature tracks in the input video. This is done using the implicit function theorem of real analysis and does not require strong statistical assumptions. Next, we prove that the 3D reconstruction is statistically biased, derive an expression for it and show that it is numerically significant. Combining these two results, we also establish a new bound on the minimum error in the depth reconstruction. We present the numerical significance of these analytical results on real video data.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114539811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Statistical Models for Skin Detection
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10094
B. Jedynak, Huicheng Zheng, M. Daoudi
We consider a sequence of three models for skin detection, built from a large collection of labelled images. Each model is a maximum entropy model with respect to constraints on marginal distributions, and the models are nested. The first model is well known to practitioners: pixels are considered independent. The second model is a Hidden Markov Model; it includes constraints that enforce smoothness of the solution. The third model is a first-order model in which the full color gradient is included. Parameter estimation, as well as optimization, cannot be tackled without approximations. We make thorough use of the Bethe tree approximation of the pixel lattice; within it, parameter estimation becomes straightforward, and the belief propagation algorithm yields an exact and fast solution for the skin probability at each pixel location. We then assess performance on the Compaq database.
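A sketch of the first (pixelwise-independent) model, assuming quantized-RGB histograms learned from labelled skin and non-skin pixels and combined with Bayes' rule; the histograms and prior below are toy stand-ins, not the learned ones:

```python
import numpy as np

def skin_probability(pixels, skin_hist, nonskin_hist, prior_skin=0.2):
    """Pixelwise-independent skin model: per-color likelihoods are
    histogram lookups over quantized RGB, combined with Bayes' rule.
    pixels is an (N, 3) array of quantized RGB values."""
    p_skin = skin_hist[pixels[:, 0], pixels[:, 1], pixels[:, 2]]
    p_nonskin = nonskin_hist[pixels[:, 0], pixels[:, 1], pixels[:, 2]]
    num = p_skin * prior_skin
    return num / (num + p_nonskin * (1 - prior_skin))

# 8x8x8 quantized-RGB histograms filled with toy values:
rng = np.random.default_rng(2)
skin_hist = rng.dirichlet(np.ones(512)).reshape(8, 8, 8)
nonskin_hist = rng.dirichlet(np.ones(512)).reshape(8, 8, 8)
pixels = rng.integers(0, 8, (5, 3))
print(skin_probability(pixels, skin_hist, nonskin_hist))
```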
{"title":"Statistical Models for Skin Detection","authors":"B. Jedynak, Huicheng Zheng, M. Daoudi","doi":"10.1109/CVPRW.2003.10094","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10094","url":null,"abstract":"We consider a sequence of three models for skin detection built from a large collection of labelled images. Each model is a maximum entropy model with respect to constraints concerning marginal distributions. Our models are nested. The first model is well known from practitioners. Pixels are considered as independent. The second model is a Hidden Markov Model. It includes constraints that force smoothness of the solution. The third model is a first order model. The full color gradient is included. Parameter estimation as well as optimization cannot be tackled without approximations. We use thoroughly Bethe tree approximation of the pixel lattice. Within it , parameter estimation is eradicated and the belief propagation algorithm permits to obtain exact and fast solution for skin probability at pixel locations. We then assess the performance on the Compaq database.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"386 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123092958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Archaeological Fragment Reconstruction Using Curve-Matching
Pub Date: 2003-06-16 | DOI: 10.1109/CVPRW.2003.10008
J. McBride, B. Kimia
We present a novel approach to puzzle solving as it relates to archaeological fragment reconstruction. We begin with a set of broken fragments. In the first stage, we compare every pair of fragments and use partial curve matching to find similar portions of their respective boundaries. Partial curve matching is typically very difficult because the specification of the partial curves is highly unconstrained and curve matching is computationally expensive. To address the first problem, we only consider matches which begin at fragment corners, and then use curve matching with normalized energy to determine how far the match extends. We also reduce computational cost by employing a multi-scale approach, which allows us to quickly generate many possible matches at a coarse scale and keep only the best ones to be matched again at a finer scale. In the second stage, we take a rank-ordered list of pairwise matches and search for a globally optimal arrangement. The search is based on a best-first strategy which adds fragments with the highest pairwise affinity first, but then evaluates their confidence as part of the global solution by rewarding the formation of triple junctions, which are dominant in archaeological puzzles. To prevent failure due to the inclusion of spurious matches, we employ a standard beam search to expand multiple solutions simultaneously. Results on several cases are demonstrated.
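A skeleton of the second-stage search, assuming pairwise matches arrive as (affinity, corner-pair) tuples from the first stage; the consistency test is a placeholder and the triple-junction reward is omitted, both simplifications of the paper's criteria:

```python
import heapq

def beam_search(pairwise_matches, beam_width=3, max_steps=4):
    """Best-first assembly with a beam: each partial solution is a list of
    accepted pairwise matches, extended by the highest-affinity matches
    that remain consistent with it."""
    def consistent(solution, match):
        # Placeholder consistency test: forbid reusing a fragment corner.
        used = {c for m in solution for c in m[1]}
        return not (set(match[1]) & used)

    beam = [(0.0, [])]  # (negated total affinity, accepted matches)
    for _ in range(max_steps):
        candidates = []
        for score, sol in beam:
            for affinity, corners in pairwise_matches:
                m = (affinity, corners)
                if m not in sol and consistent(sol, m):
                    candidates.append((score - affinity, sol + [m]))
        if not candidates:
            break
        beam = heapq.nsmallest(beam_width, candidates)  # keep the best partial solutions
    return beam[0][1]

# Toy matches: (affinity, (cornerA, cornerB)); higher affinity is better.
matches = [(0.9, ("f1c2", "f2c1")), (0.8, ("f2c3", "f3c1")),
           (0.7, ("f1c2", "f3c2")), (0.4, ("f3c3", "f4c1"))]
print(beam_search(matches))
```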
{"title":"Archaeological Fragment Reconstruction Using Curve-Matching","authors":"J. McBride, B. Kimia","doi":"10.1109/CVPRW.2003.10008","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10008","url":null,"abstract":"We present a novel approach to the problem of puzzle solving as it relates to archaeological fragment reconstruction. We begin with a set of broken fragments. In the first stage, we compare every pair of fragments and use partial curve matching to find similar portions of their respective boundaries. Partial curve matching is typically a very difficult problem because the specification of the partial curves are highly unconstrained and curve matching is computationally expensive. To address the first problem, we only consider matches which begin at fragment corners and then use curve-matching with normalized energy to determine how far the match extends. We also reduce computational cost by employing a multi-scale approach. This allows us to quickly generate many possible matches at a coarse scale and only keep the best ones to be matched again at a finer scale. In the second stage, we take a rank-ordered list of pairwise matches to search for a globally optimal arrangement. The search is based on a best-first strategy which adds fragments with the highest pairwise affinity first, but then evaluates their confidence as part of the global solution by rewarding the formation of triple junctions which are dominant in archaeological puzzles. To prevent failure due to the inclusion of spurious matches, we employ a standard beam-search to simultaneously expand on multiple solutions. Results on several cases are demonstrated.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131529503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}