Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048329
Synthesis of fingerprint images
J. Araque, Madelina Baena, Benjamin E. Chalela, David Navarro, Pedro R. Vizcaya
A method for the synthesis of fingerprint images is presented. Global features are condensed in a linear model whose parameters are generated according to the statistical distribution of natural fingerprint patterns. When the major types of global patterns are considered independently, these parameters follow an approximately normal distribution. Local features are generated by iteratively applying a finite state filter to an initial image. Results show that it is possible to control the location of minutiae in constant-orientation regions, and that variable-orientation regions generate minutiae on their own.
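The abstract does not detail the finite state filter, so the following is only a minimal sketch of the general idea it describes: local ridge-like structure emerges when an initial noise image is repeatedly filtered with an orientation-tuned filter. The Gabor kernel, its frequency and size, and the soft re-binarisation step are assumptions of this sketch, not the paper's method.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, freq=0.12, sigma=4.0, size=17):
    """Oriented Gabor kernel; ridge frequency and width are illustrative values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def synthesize_patch(orientation, iterations=8, shape=(128, 128), seed=0):
    """Grow a ridge-like pattern by repeatedly filtering noise with an
    orientation-tuned filter and softly re-binarising (a stand-in for the
    paper's finite state filter)."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(shape)
    k = gabor_kernel(orientation)
    for _ in range(iterations):
        img = convolve(img, k, mode="reflect")
        img = np.tanh(img / (np.abs(img).max() + 1e-9))  # keep ridges crisp between passes
    return img

# Example: a patch with a constant 30-degree ridge orientation
patch = synthesize_patch(np.deg2rad(30.0))
```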
{"title":"Synthesis of fingerprint images","authors":"J. Araque, Madelina Baena, Benjamin E. Chalela, David Navarro, Pedro R. Vizcaya","doi":"10.1109/ICPR.2002.1048329","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048329","url":null,"abstract":"A method for synthesis of fingerprint images is presented. Global features are condensed in a linear model whose parameters are generated according to the statistical distribution of natural fingerprint patterns. When the major types of global patterns are considered independently, these parameters display a normal behavior. Local features are generated applying iteratively a finite state filter to an initial image. Results show that it is possible to control the location of the minutia in constant orientation regions and that variable orientation regions generate minutia on their own.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130324275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048307
A new combination scheme for HMM-based classifiers and its application to handwriting recognition
Simon Günter, H. Bunke
Handwritten text recognition is one of the most difficult problems in the field of pattern recognition. Combining multiple classifiers has been shown to increase the recognition rate compared to single classifiers. In this paper, a new combination method for HMM-based handwritten word recognizers is introduced. In contrast to many other multiple classifier combination schemes, where the combination takes place at the decision level, the proposed method combines various HMMs at a more elementary level. The usefulness of the new method is demonstrated experimentally in the context of a handwritten word recognition task.
{"title":"A new combination scheme for HMM-based classifiers and its application to handwriting recognition","authors":"Simon Günter, H. Bunke","doi":"10.1109/ICPR.2002.1048307","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048307","url":null,"abstract":"Handwritten text recognition is one of the most difficult problems in the field of pattern recognition. The combination of multiple classifiers has been proven to be able to increase the recognition rate when compared to single classifiers. In this paper a new combination method for HMM based handwritten word recognizers is introduced. In contrast with many other multiple classifier combination schemes, where the combination takes place at the decision level, the proposed method combines various HMMs at a more elementary level. The usefulness of the new method is experimentally demonstrated in the context of a handwritten word recognition task.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130365247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048371
Integrated analysis of speech and images as a probabilistic decoding process
S. Wachsmuth, G. Sagerer
Speech understanding and vision are the two most important modalities in human-human communication. However, the emulation of these by a computer faces fundamental difficulties due to noisy data, vague meanings, previously unseen objects or unheard words, occlusions, spontaneous speech effects, and context dependence. Thus, the interpretation processes on both channels are highly error-prone. This paper presents a new perspective on the problem of relating speech and image interpretations as a probabilistic decoding process. It is shown that such an integration scheme is robust regarding partial or erroneous interpretations. Furthermore, it is shown that implicit error correction strategies can be formulated in this probabilistic framework that lead to improved scene interpretation.
{"title":"Integrated analysis of speech and images as a probabilistic decoding process","authors":"S. Wachsmuth, G. Sagerer","doi":"10.1109/ICPR.2002.1048371","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048371","url":null,"abstract":"Speech understanding and vision are the two most important modalities in human-human communication. However, the emulation of these by a computer faces fundamental difficulties due to noisy data, vague meanings, previously unseen objects or unheard words, occlusions, spontaneous speech effects, and context dependence. Thus, the interpretation processes on both channels are highly error-prone. This paper presents a new perspective on the problem of relating speech and image interpretations as a probabilistic decoding process. It is shown that such an integration scheme is robust regarding partial or erroneous interpretations. Furthermore, it is shown that implicit error correction strategies can be formulated in this probabilistic framework that lead to improved scene interpretation.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130519539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048471
Elastic minutiae matching by means of thin-plate spline models
A. Bazen, S. H. Gerez
This paper presents a novel minutiae matching method that deals with elastic distortions by normalizing the shape of the test fingerprint with respect to the template. The method first determines possible matching minutiae pairs by comparing local neighborhoods of the minutiae. Next, a thin-plate spline model is used to describe the non-linear distortions between the two sets of possible pairs. One of the fingerprints is deformed and registered according to the estimated model, and the number of matching minutiae is then counted. This method is able to deal with all possible non-linear distortions while using very tight bounding boxes. For deformed fingerprints, the algorithm gives considerably higher matching scores than rigid matching algorithms, while taking only 100 ms on a 1 GHz P-III machine.
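As a rough illustration of the pipeline described above, the sketch below fits a 2D thin-plate spline to candidate minutiae pairs, warps the test minutiae, and counts template minutiae that fall inside a tight box around a warped minutia. The regularisation constant, the box size, the Chebyshev-distance box test, and the lack of a one-to-one assignment step are simplifying assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def _U(r2):
    # Thin-plate spline radial basis U(r) = r^2 log r^2, with U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        out = r2 * np.log(r2)
    return np.nan_to_num(out)

def fit_tps(src, dst, reg=1e-6):
    """Fit a 2D thin-plate spline mapping src control points onto dst points."""
    n = len(src)
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=2)
    K = _U(d2) + reg * np.eye(n)                 # small regulariser for numerical stability
    P = np.hstack([np.ones((n, 1)), src])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)
    return params[:n], params[n:]                # non-linear weights, affine part

def apply_tps(points, src, w, a):
    d2 = np.sum((points[:, None, :] - src[None, :, :]) ** 2, axis=2)
    return _U(d2) @ w + np.hstack([np.ones((len(points), 1)), points]) @ a

def match_score(test_minutiae, template_minutiae, pairs, box=6.0):
    """Register the test minutiae with the TPS estimated from candidate pairs,
    then count warped test minutiae that land inside a tight box around some
    template minutia (pairs is a list of (test_index, template_index) tuples)."""
    src = test_minutiae[[i for i, _ in pairs]]
    dst = template_minutiae[[j for _, j in pairs]]
    w, a = fit_tps(src, dst)
    warped = apply_tps(test_minutiae, src, w, a)
    d = np.abs(warped[:, None, :] - template_minutiae[None, :, :]).max(axis=2)
    return int((d.min(axis=1) < box).sum())
```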
{"title":"Elastic minutiae matching by means of thin-plate spline models","authors":"A. Bazen, S. H. Gerez","doi":"10.1109/ICPR.2002.1048471","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048471","url":null,"abstract":"This paper presents a novel minutiae matching method that deals with elastic distortions by normalizing the shape of the test fingerprint with respect to the template. The method first determines possible matching minutiae pairs by means of comparing local neighborhoods of the minutiae. Next a thin-plate spline model is used to describe the non-linear distortions between the two sets of possible pairs. One of the fingerprints is deformed and registered according to the estimated model, and then the number of matching minutiae is counted. This method is able to deal with all possible non-linear distortions while using very tight bounding boxes. For deformed fingerprints, the algorithm gives considerably higher matching scores compared to rigid matching algorithms, while only taking 100 ms on a 1 GHz P-III machine.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127003703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048452
Classifying textures when seen from different distances
X. Lladó, M. Petrou
The purpose of this work is to analyse what happens to the surface information when the image resolution is modified. We deduce how the same surface appears when seen from different distances. Using 4-source Colour Photometric Stereo, which provides the surface shape and colour information, we present a method for predicting what a surface texture looks like when the distance of the camera changes. We demonstrate this technique by classifying textured surfaces seen from distances different from those of the textured surfaces in the database.
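The method builds on 4-source Colour Photometric Stereo. A minimal Lambertian least-squares version of that recovery step (not the paper's distance-prediction model) might look like the sketch below; the luminance-based normal estimate and the per-channel albedo projection are assumptions of this sketch.

```python
import numpy as np

def colour_photometric_stereo(images, lights):
    """Recover surface normals and per-channel albedo from four colour images
    taken under known distant light directions (Lambertian assumption).

    images: array of shape (4, H, W, 3); lights: array of shape (4, 3)."""
    imgs = np.asarray(images, dtype=float)
    H, W = imgs.shape[1:3]
    lum = imgs.mean(axis=3).reshape(4, -1)            # use luminance to estimate geometry
    # least-squares solve lights @ g = lum for every pixel, where g = albedo * normal
    g, *_ = np.linalg.lstsq(lights, lum, rcond=None)  # shape (3, H*W)
    rho = np.linalg.norm(g, axis=0)
    normals = (g / (rho + 1e-9)).T.reshape(H, W, 3)
    # per-channel albedo: project each colour channel onto the recovered shading
    shading = (lights @ g).clip(min=1e-9)             # (4, H*W) predicted shading
    chan = imgs.reshape(4, -1, 3)
    albedo = (chan * shading[..., None]).sum(axis=0) / (shading**2).sum(axis=0)[:, None]
    return normals, albedo.reshape(H, W, 3), rho.reshape(H, W)
```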
{"title":"Classifying textures when seen from different distances","authors":"X. Lladó, M. Petrou","doi":"10.1109/ICPR.2002.1048452","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048452","url":null,"abstract":"The purpose of this work is to analyse what happens to the surface information when the image resolution is modified. We deduce how the same surface appears if seen from different distances. Using 4-source Colour Photometric Stereo, which provides the surface shape and colour information, a method for predicting how surface texture looks like when changing the distance of the camera is presented. We demonstrate this technique by classifying textured surfaces seen from different distances than the textured surfaces in the database.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"229 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123394093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048422
Affine invariant retrieval of shapes based on hand-drawn sketches
H. Ip, Angus K. Y. Cheng, William Y. F. Wong
This paper presents a compact and efficient shape signature that can be applied to shape retrieval from hand-drawn sketches. The two proposed invariant features that constitute our affine invariant signature are: (a) the angle between spoke vectors, and (b) the cumulated normalized area between consecutive spoke vectors. The signature can be computed efficiently from an input shape and supports fast similarity matching. The approach has also been extended to partial shape retrieval through a partial shape decomposition procedure.
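A minimal sketch of computing the two features named above for a sampled closed contour is given below; using the centroid as the spoke origin, uniform contour sampling, and normalising by the total enclosed area are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def spoke_signature(contour):
    """For a closed contour given as an (N, 2) array of boundary points,
    return (a) the angles between consecutive spoke vectors and
    (b) the cumulated, normalised triangle areas swept between them."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    spokes = pts - centroid                          # spoke vectors from centroid to boundary
    nxt = np.roll(spokes, -1, axis=0)

    # (a) angle between each spoke and the next one
    cosang = (spokes * nxt).sum(axis=1) / (
        np.linalg.norm(spokes, axis=1) * np.linalg.norm(nxt, axis=1) + 1e-12)
    angles = np.arccos(np.clip(cosang, -1.0, 1.0))

    # (b) signed triangle area between consecutive spokes, cumulated and
    # normalised by the total enclosed area
    tri = 0.5 * (spokes[:, 0] * nxt[:, 1] - spokes[:, 1] * nxt[:, 0])
    cum_area = np.cumsum(tri) / (np.abs(tri.sum()) + 1e-12)
    return angles, cum_area
```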
{"title":"Affine invariant retrieval of shapes based on hand-drawn sketches","authors":"H. Ip, Angus K. Y. Cheng, William Y. F. Wong","doi":"10.1109/ICPR.2002.1048422","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048422","url":null,"abstract":"Presents a compact and efficient shape signature that can be applied to shape retrieval from hand-drawn sketches. The two proposed invariant features that constitute our affine invariant signature are: (a) angle between spoke vectors, and (b) cumulated normalized area between consecutive spoke vectors. The signature can be computed efficiently from an input shape and supports fast similarity matching. The approach has also been extended for partial shape retrieval through a partial shape decomposition procedure.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116214878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048465
On selecting colour components for skin detection
Giovani Gómez
We used a data analysis approach for selecting colour components for skin detection. The criterion for this selection was to achieve a reasonable degree of generalisation and recognition, where skin points exhibit a well-defined cluster. After evaluating each component of several colour models, we found that a mixture of components copes well with these requirements. We list the top components, and from these we select one colour space: H-GY-Wr. A nearly convex area of this space contains 97% of all skin points, whilst encompassing only 5.16% false positives. Even simple rules over this well-shaped space can achieve a high recognition rate and low overlap with non-skin points. This data analysis approach will help many skin detection systems.
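The GY and Wr components are defined in the paper, not in the abstract, so the sketch below only illustrates what "simple rules over a well-shaped space" can look like, using ordinary hue and normalised-RGB chromaticity as stand-in components; all thresholds are illustrative assumptions, not the paper's fitted rules.

```python
import numpy as np
from skimage import color   # only used to obtain the hue channel

def skin_mask(rgb, h_max=0.14, r_min=0.36, r_max=0.55, g_min=0.26, g_max=0.37):
    """Classify pixels of an H x W x 3 uint8 image as skin with simple box
    rules over a few colour components. Components and thresholds here are
    illustrative stand-ins for the paper's H-GY-Wr space and rules."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    h = color.rgb2hsv(rgb)[..., 0]                     # hue in [0, 1]
    s = rgb.sum(axis=-1) + 1e-9
    r, g = rgb[..., 0] / s, rgb[..., 1] / s            # chromaticity (normalised RGB)
    return (h < h_max) & (r > r_min) & (r < r_max) & (g > g_min) & (g < g_max)
```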
{"title":"On selecting colour components for skin detection","authors":"Giovani Gómez","doi":"10.1109/ICPR.2002.1048465","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048465","url":null,"abstract":"We used a data analysis approach for selecting colour components for skin detection. The criterion for this selection was to achieve a reasonable degree of generalisation and recognition, where skin points exhibit a well defined cluster. After evaluating each component of several colour models, we found that a mixture of components can cope well with such requirements. We list the top components, and from these we select one colour space: H-GY-Wr. A nearly convex area of this space contains 97% of all skin points, whilst it encompasses 5.16% of false positives. Even simple rules over this well-shaped space can achieve a high recognition rate and low overlap to non-skin points. This is a data analysis approach that will help to many skin detection systems.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"166 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116308183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048436
Haptic vision - vision-based haptic exploration
Hiromi T. Tanaka, Kiyotaka Kushihama
Recently, there has been a growing need for haptic exploration to estimate and extract physical object properties such as mass, friction, elasticity, and relational constraints. In this paper we propose a novel paradigm, which we call haptic vision: a vision-based haptic exploration approach toward the automatic construction of a reality-based virtual space simulator by augmenting active vision with active touch. We apply this technique to the estimation of mass and relational constraints, and use the results to construct a virtual object manipulation simulator. Experimental results show the feasibility and validity of the proposed approach.
{"title":"Haptic vision - vision-based haptic exploration","authors":"Hiromi T. Tanaka, Kiyotaka Kushihama","doi":"10.1109/ICPR.2002.1048436","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048436","url":null,"abstract":"Recently, there are growing needs for haptic exploration to estimate and extract physical object properties such as mass, friction, elasticity, relational constraints etc.. In this paper we propose a novel paradigm, we call haptic vision, which is a vision-based haptic exploration approach toward an automatic construction of reality-based virtual space simulator by augmenting active vision with active touch. We apply this technique to mass, and relational constraints estimation, and use these results to construct virtual object manipulation simulator. Experimental results show the feasibility and validity of the proposed approach.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124541674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048426
A variational approach to one dimensional phase unwrapping
C. Lacombe, Pierre Kornprobst, G. Aubert, L. Blanc-Féraud
Over the past ten years, many phase unwrapping algorithms have been developed and formulated in a discrete setting. Here we propose a variational formulation to solve the problem. This continuous framework will allow us to impose some constraints on the smoothness of the solution and to implement them efficiently. This method is presented in the one dimensional case, and will serve as a basis for future developments in the real 2D case. Numerical schemes and results on a synthetic noisy wrapped signal are given.
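One plausible discrete form of such a variational formulation, assumed here for illustration (the paper's exact functional and smoothness constraints may differ), penalises the deviation of the first differences of the unwrapped signal from the wrapped differences of the data, plus a second-difference smoothness term, and can be solved by linear least squares:

```python
import numpy as np

def wrap(x):
    """Wrap values into [-pi, pi)."""
    return (x + np.pi) % (2 * np.pi) - np.pi

def unwrap_1d_variational(psi, lam=0.1):
    """Unwrap a 1D wrapped signal psi by minimising a data term on first
    differences plus a smoothness term on second differences (one plausible
    discretisation; not necessarily the paper's exact functional)."""
    n = len(psi)
    D1 = np.diff(np.eye(n), axis=0)                  # first-difference operator, (n-1, n)
    D2 = np.diff(np.eye(n), n=2, axis=0)             # second-difference operator, (n-2, n)
    target = wrap(np.diff(psi))                      # wrapped gradient of the data
    A = np.vstack([D1, np.sqrt(lam) * D2])
    b = np.concatenate([target, np.zeros(n - 2)])
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi + psi[0] - phi[0]                     # fix the unknown additive constant

# Example on a synthetic noisy wrapped signal
t = np.linspace(0, 6 * np.pi, 400)
truth = 0.5 * t**1.3
noisy_wrapped = wrap(truth + 0.05 * np.random.randn(t.size))
estimate = unwrap_1d_variational(noisy_wrapped)
```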
{"title":"A variational approach to one dimensional phase unwrapping","authors":"C. Lacombe, Pierre Kornprobst, G. Aubert, L. Blanc-Féraud","doi":"10.1109/ICPR.2002.1048426","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048426","url":null,"abstract":"Over the past ten years, many phase unwrapping algorithms have been developed and formulated in a discrete setting. Here we propose a variational formulation to solve the problem. This continuous framework will allow us to impose some constraints on the smoothness of the solution and to implement them efficiently. This method is presented in the one dimensional case, and will serve as a basis for future developments in the real 2D case. Numerical schemes and results on a synthetic noisy wrapped signal are given.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124228271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2002-12-10 | DOI: 10.1109/ICPR.2002.1048235
Geometric SVM: a fast and intuitive SVM algorithm
S. Vishwanathan, M. Murty
We present a geometrically motivated algorithm for finding the Support Vectors of a given set of points. This algorithm is reminiscent of the DirectSVM algorithm in the way it picks data points for inclusion in the Support Vector set, but it uses an optimization-based approach to add them to the Support Vector set. This ensures that the algorithm scales as O(n^3) in the worst case and O(n|S|^2) in the average case, where n is the total number of points in the data set and |S| is the number of Support Vectors. Further, the memory requirements scale as O(n^2) in the worst case and O(|S|^2) in the average case. The advantage of this algorithm is that it is more intuitive and performs extremely well when the number of Support Vectors is only a small fraction of the entire data set. It can also be used to calculate the leave-one-out error based on the order in which data points were added to the Support Vector set. We also present results on real-life data sets to validate our claims.
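In the spirit of that description, the following sketch grows a candidate Support Vector set greedily and re-optimises only on that set; seeding with the closest opposite-class pair, the margin-violation criterion, and the use of a standard SVM solver (scikit-learn's SVC) for the small subproblems are assumptions of this sketch, not the paper's optimisation procedure.

```python
import numpy as np
from sklearn.svm import SVC   # small subproblems are re-solved with a standard solver here

def incremental_svm(X, y, kernel="rbf", gamma=1.0, tol=1e-3, max_iter=200):
    """Greedy working-set scheme: grow a candidate Support Vector set and
    re-optimise only on that set. X: (n, d) array, y: array of +1/-1 labels."""
    # seed with the closest pair of points from opposite classes
    pos, neg = np.where(y == 1)[0], np.where(y == -1)[0]
    d = np.linalg.norm(X[pos][:, None, :] - X[neg][None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(d), d.shape)
    work = [pos[i], neg[j]]

    model = None
    for _ in range(max_iter):
        model = SVC(kernel=kernel, gamma=gamma, C=1e3).fit(X[work], y[work])
        margins = y * model.decision_function(X)      # functional margins of all points
        margins[work] = np.inf                         # points already in the working set
        worst = int(np.argmin(margins))
        if margins[worst] >= 1.0 - tol:                # no margin violators remain
            break
        work.append(worst)                             # add the worst violator and re-solve
    return model, work
```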
{"title":"Geometric SVM: a fast and intuitive SVM algorithm","authors":"S. Vishwanathan, M. Murty","doi":"10.1109/ICPR.2002.1048235","DOIUrl":"https://doi.org/10.1109/ICPR.2002.1048235","url":null,"abstract":"We present a geometrically motivated algorithm for finding the Support Vectors of a given set of points. This algorithm is reminiscent of the DirectSVM algorithm, in the way it picks data points for inclusion in the Support Vector set, but it uses an optimization based approach to add them to the Support Vector set. This ensures that the algorithm scales to O(n/sup 3/) in the worst case and O(n|S|/sup 2/) in the average case where n is the total number of points in the data set and |S| is the number of Support Vectors. Further the memory requirements also scale as O(n/sup 2/) in the worst case and O(|S|/sup 2/) in the average case. The advantage of this algorithm is that it is more intuitive and performs extremely well when the number of Support Vectors is only a small fraction of the entire data set. It can also be used to calculate leave one out error based on the order in which data points were added to the Support Vector set. We also present results on real life data sets to validate our claims.","PeriodicalId":159502,"journal":{"name":"Object recognition supported by user interaction for service robots","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125741220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}