Pub Date: 2005-10-09 | DOI: 10.1109/SIBGRAPI.2005.37
Leandro Tonietto, M. Walter, C. Jung
Patch-based texture synthesis builds a texture by joining together blocks of pixels, called patches, taken from the original sample. Usually the best patches are selected from all possible candidates using an L2 norm on the RGB or grayscale pixel values of the boundary zones. The L2 metric measures the raw pixel-to-pixel difference, disregarding image structures, such as edges, that are important to the human visual system and therefore to the synthesis of new textures. We present a wavelet-based approach for selecting patches in patch-based texture synthesis. For each candidate patch we compute the wavelet coefficients of the boundary region and pick the patch with the smallest error computed from those coefficients. We show that using wavelets as the metric for selecting the best patches improves texture synthesis for samples on which previous work fails, mainly textures with prominent aligned features.
Title: Patch-Based Texture Synthesis Using Wavelets. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
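The selection step described above can be sketched in a few lines. This is not the paper's implementation; it is a minimal illustrative sketch that uses a single-level 1-D Haar transform as a stand-in for the unspecified wavelet basis, and all function names are hypothetical.

```python
# Illustrative sketch only: compare patch-boundary errors with a raw L2
# metric versus a wavelet-coefficient metric (Haar chosen as an assumption).

def l2_error(a, b):
    """Sum of squared pixel differences over a boundary strip."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def haar_1d(signal):
    """One level of the 1-D Haar transform: pairwise averages, then details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + det

def wavelet_error(a, b):
    """Squared difference between the Haar coefficients of two boundary strips."""
    ca, cb = haar_1d(a), haar_1d(b)
    return sum((x - y) ** 2 for x, y in zip(ca, cb))

def best_patch(target_boundary, candidates, error=wavelet_error):
    """Pick the candidate patch whose boundary minimises the chosen metric."""
    return min(candidates, key=lambda c: error(target_boundary, c))
```

Swapping `error=l2_error` for `error=wavelet_error` reproduces the baseline selection the abstract argues against: the wavelet metric additionally penalises mismatched detail coefficients, i.e. mismatched edges.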
Pub Date: 2005-10-09 | DOI: 10.1109/SIBGRAPI.2005.10
C. Jung, C. Kelber
In this paper, we propose a new model for lane tracking and curve detection. We use a linear-parabolic model for each lane boundary, and apply constraints that link both lane boundaries based on the expected geometry of the road. The parabolic part of the model, which fits the far field, is then used to analyze the geometry of the road ahead (straight, right curve, or left curve), with applications in driver-assistance systems and road inspection. Experimental results indicate that the introduced geometric constraints yield a more consistent fit than fitting each lane boundary individually, and that the parabolic part of the model can effectively keep the driver informed about the geometry of the road ahead.
Title: An Improved Linear-Parabolic Model for Lane Following and Curve Detection. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
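A linear-parabolic boundary of the kind described above can be written as a piecewise function that is linear in the near field and parabolic in the far field, with value and slope matched at the transition. The sketch below is an assumption about the model's form, not the paper's code; the transition point `x0` and the sign convention for curve direction are illustrative.

```python
# Illustrative sketch of a linear-parabolic lane-boundary model.

def lane_boundary(x, a, b, c, x0):
    """Linear for x <= x0 (near field), parabolic beyond (far field),
    with matching value and slope at the transition x0 (C1 continuity)."""
    if x <= x0:
        return a + b * x
    dx = x - x0
    return a + b * x0 + b * dx + c * dx * dx

def road_geometry(c, eps=1e-3):
    """Classify the road ahead from the quadratic coefficient c.
    The sign convention (positive c = left curve) is an assumption."""
    if c > eps:
        return "left curve"
    if c < -eps:
        return "right curve"
    return "straight"
```

Because the quadratic term only activates past `x0`, the near field stays a robust straight-line fit while the far-field coefficient `c` carries the curvature information used for curve detection.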
The digital image segmentation challenge has motivated the development of a plethora of methods and approaches. Thresholding, a quite simple approach, is still intensively applied, mainly in real-time vision applications. However, threshold criteria often depend on entropic or statistical image features. This work investigates the relationship between these features and subjective human threshold decisions. An image thresholding model based on these subjective decisions and global statistical features was then developed by training a Radial Basis Function Network (RBFN). This work also compares automatic thresholding methods to the human responses, and compares the RBFN-modeled answers to the automatic thresholding. The results show that the entropy-based method was closer to the RBFN-modeled thresholding than the variance-based method. It was also found that another automatic method, which combines global and local criteria, presented a higher correlation with the human responses.
Title: A RBFN Perceptive Model for Image Thresholding. Authors: Fabricio M. Lopes, Luís Augusto Consularo. DOI: 10.1109/SIBGRAPI.2005.8. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
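The forward pass of an RBFN of the kind described above is just a weighted sum of radial activations over the image-feature vector. The sketch below assumes Gaussian basis functions and hypothetical names; the actual features, centers, and training procedure are the paper's and are not reproduced here.

```python
import math

# Illustrative sketch of an RBFN mapping global image features to a threshold.

def rbf(x, center, width):
    """Gaussian radial basis function of the distance to a center."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, center))
    return math.exp(-d2 / (2 * width ** 2))

def rbfn_threshold(features, centers, widths, weights, bias=0.0):
    """Predicted threshold = bias + weighted sum of RBF activations."""
    acts = [rbf(features, c, w) for c, w in zip(centers, widths)]
    return bias + sum(wt * a for wt, a in zip(weights, acts))
```

Training would fit `centers`, `widths`, and `weights` so the output tracks the human-chosen thresholds; at inference the network maps a feature vector straight to a grey-level threshold.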
Shading for cel animation based on images is a recent research topic in computer-assisted animation. This paper proposes an image-based shading pipeline that gives a 3D appearance to a 2D character by inspecting the hand-drawn image directly. The proposed method estimates normal vectors on the character's outline and interpolates them over the remaining image. The method does not limit the animator's creative process and requires minimal user intervention. The resulting shading pipeline can be easily applied to photorealistic and non-photorealistic 2D cel animation. In the proposed method, the animator can easily simulate environment reflections on the surface of 2D reflecting objects. To the best of the authors' knowledge, the proposed technique is the only one in the literature that is genuinely an image-based method for 2D animation.
Title: An Image-Based Shading Pipeline for 2D Animation. Authors: Hedlena Bezerra, B. Feijó, L. Velho. DOI: 10.1109/SIBGRAPI.2005.9. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
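The two core steps above, estimating normals on the outline and spreading them over the interior, can be sketched as follows. This is an assumption-laden 2-D sketch, not the paper's pipeline: outline normals are taken perpendicular to the local tangent of a closed polyline, and the interior blend uses inverse-distance weighting as a simple stand-in for whatever interpolation the authors use.

```python
import math

# Illustrative sketch: outline-normal estimation + interior interpolation.

def outline_normals(points):
    """Unit normals perpendicular to the outline tangent at each vertex
    of a closed polyline (a z component would be added for shading)."""
    n = len(points)
    normals = []
    for i in range(n):
        x0, y0 = points[i - 1]            # previous vertex (wraps around)
        x1, y1 = points[(i + 1) % n]      # next vertex
        tx, ty = x1 - x0, y1 - y0         # central-difference tangent
        length = math.hypot(tx, ty) or 1.0
        normals.append((-ty / length, tx / length))
    return normals

def interpolate_normal(p, outline, normals, power=2.0):
    """Inverse-distance-weighted blend of outline normals at interior point p."""
    wx = wy = wsum = 0.0
    for (qx, qy), (nx, ny) in zip(outline, normals):
        d = math.hypot(p[0] - qx, p[1] - qy)
        if d == 0.0:
            return (nx, ny)               # exactly on the outline
        w = 1.0 / d ** power
        wx += w * nx
        wy += w * ny
        wsum += w
    length = math.hypot(wx / wsum, wy / wsum) or 1.0
    return (wx / wsum / length, wy / wsum / length)
```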
We present a sketching interface for modeling shapes defined by large sets of points. Our system supports powerful modeling operations that are applied directly to the points defining the surface. These operations are based on sketch input, allowing objects to be created directly with simple strokes. Objects may be edited either by boundary over-sketching, or by cutting, relief drawing on surfaces, merging, and cloning. By combining these operations we can create complex shapes, including objects with sharp features. Our work uses the Multi-level Partition of Unity Implicits (MPU) technique to convert point clouds into implicit surfaces. Furthermore, we have devised a fast adaptive incremental polygonization algorithm that takes advantage of the MPU structure. This makes local re-polygonization possible and allows real-time modifications to large point sets, since it avoids re-calculating the whole polygonal representation from scratch after each modification.
Title: A Calligraphic Interface for Interactive Free-Form Modeling with Large Datasets. Authors: B. Araújo, J. Jorge. DOI: 10.1109/SIBGRAPI.2005.2. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
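The key to local re-polygonization as described above is invalidating only the spatial cells touched by an edit. The fragment below is a hypothetical sketch of that idea with axis-aligned bounding-box cells; the paper's actual adaptive MPU-based structure is more elaborate.

```python
# Illustrative sketch: select only the grid cells whose bounds intersect an
# edited region, so only those need re-polygonization after a modification.

def edited_cells(grid_cells, edit_min, edit_max):
    """grid_cells: list of (cell_min, cell_max) axis-aligned 3-D boxes.
    Returns the subset overlapping the box [edit_min, edit_max]."""
    def overlaps(cell):
        cmin, cmax = cell
        return all(cmin[i] <= edit_max[i] and cmax[i] >= edit_min[i]
                   for i in range(3))
    return [c for c in grid_cells if overlaps(c)]
```

Re-meshing only these cells is what keeps edits on large point sets interactive: the cost scales with the size of the edit, not of the model.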
Pub Date: 2005-09-27 | DOI: 10.1109/SIBGRAPI.2005.13
Y. Zana, R. M. C. Junior, Regis de A. Barbosa
We present an automatic face verification system inspired by known properties of biological systems. In the proposed algorithm, the whole image is converted from the spatial to the polar frequency domain by a Fourier-Bessel transform (FBT). The use of the whole image (global analysis) is compared to the case where only face image regions (local analysis) are considered. The resulting representations are embedded in a dissimilarity space, where each image is represented by its distance to all the other images, and a Pseudo-Fisher discriminator is built. Verification tests on the FERET database showed that the local-based algorithm outperforms the global-FBT version. The local-FBT algorithm performed on par with state-of-the-art methods under different testing conditions, indicating that the proposed system is highly robust to variations in expression, age, and illumination. We also evaluated the performance of the proposed system under strong occlusion and found that it remains highly robust with up to 50% of the face occluded. Finally, we fully automated the verification system by implementing face and eye detection algorithms. Under this condition, the local approach was only slightly superior to the global approach.
Title: Automatic Face Recognition System Based on Local Fourier-Bessel Feature. Published in: XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05).
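The dissimilarity-space embedding mentioned above is simple to state: each sample is represented not by its own features but by its vector of distances to a set of prototype images. The sketch below illustrates only that embedding step, with Euclidean distance as an assumed stand-in for the FBT-domain distance; names are hypothetical.

```python
import math

# Illustrative sketch of a dissimilarity-space embedding: a sample becomes
# the vector of its distances to every prototype (here, every other image).

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def dissimilarity_embedding(sample, prototypes, distance=euclidean):
    """Represent `sample` by its distances to all prototypes."""
    return [distance(sample, p) for p in prototypes]
```

A linear classifier (the Pseudo-Fisher discriminator in the paper) is then trained on these distance vectors rather than on the raw FBT coefficients.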