A Collision Detection and Response Scheme for Simplified Physically Based Animation
Y. P. Atencio, Claudio Esperança, P. R. Cavalcanti, Antonio A. F. Oliveira
In this paper we describe a system for the physical animation of rigid and deformable objects. Objects are represented as groups of particles linked by linear constraints, and a Verlet integrator is used for motion computation. Unlike traditional approaches, we accomplish physical simulation without explicitly computing orientation matrices, torques or inertia tensors. The main contribution of our work lies in the way the system handles collisions, employing different approaches for deformable and rigid bodies. In particular, we show how collision detection using the GJK algorithm [9] and bounding sphere hierarchies can be combined with the projection-based collision response technique described by Jakobsen [14].
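To make the particle-and-constraint machinery concrete, here is a minimal sketch of a position Verlet step followed by iterative projection of distance constraints in the spirit of Jakobsen [14]. It is an illustration only, not the authors' implementation: the function names, the equal-mass correction and the fixed iteration count are our assumptions.

```python
import numpy as np

def verlet_step(x, x_prev, accel, dt):
    """Position Verlet: next positions from current, previous and acceleration."""
    x_new = 2.0 * x - x_prev + accel * dt * dt
    return x_new, x          # new positions; the old ones become 'previous'

def project_distance_constraints(x, constraints, iterations=5):
    """Iteratively move particle pairs so each link recovers its rest length."""
    for _ in range(iterations):
        for i, j, rest in constraints:
            delta = x[j] - x[i]
            dist = np.linalg.norm(delta)
            if dist < 1e-12:
                continue
            corr = 0.5 * (dist - rest) / dist * delta   # equal masses assumed
            x[i] += corr
            x[j] -= corr
    return x

# toy example: two particles falling under gravity, linked by a rigid rod
dt = 1.0 / 60.0
gravity = np.array([0.0, -9.81, 0.0])
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
x_prev = x.copy()
constraints = [(0, 1, 1.0)]          # particles 0 and 1, rest length 1.0

for _ in range(10):
    x, x_prev = verlet_step(x, x_prev, gravity, dt)
    x = project_distance_constraints(x, constraints)
```

Collision response in this framework works in the same projective way: a penetrating particle is pushed back onto the obstacle surface, and the constraint iterations restore the body's shape.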
{"title":"A Collision Detection and Response Scheme for Simplified Physically Based Animation","authors":"Y. P. Atencio, Claudio Esperança, P. R. Cavalcanti, Antonio A. F. Oliveira","doi":"10.1109/SIBGRAPI.2005.3","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.3","url":null,"abstract":"In this paper we describe a system for physical animation of rigid and deformable objects. These are represented as groups of particles linked by linear constraints, while a Verlet integrator is used for motion computation. Unlike traditional approaches, we accomplish physical simulation without explicitly computing orientation matrices, torques or inertia tensors. The main contribution of our work is related to the way collisions are handled by the system, which employs different approaches for deformable and rigid bodies. In particular, we show how collision detection using the GJK algorithm [9] and bounding sphere hierarchies can be combined with the projection based collision response technique described by Jakobsen [14].","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115291538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.5
A Linear Algorithm for Exact Pattern Matching in Planar Subdivisions
Pedro Ribeiro de Andrade Neto, A. Guedes
Subgraph isomorphism is a common approach to pattern search problems, but it is NP-complete in general. It is therefore worth investigating approximate solutions or special cases of the problem. Planar subdivisions can be regarded as a special case of graphs: besides nodes and edges, they impose a more rigid topology through the order of the edges, which gives rise to the concept of face. This work presents a linear algorithm for pattern search in planar subdivisions. The algorithm is based on a hybrid of the dual graph and the region adjacency graph (RAG) to represent the patterns, avoiding any additional storage cost. The patterns are then searched for in the subdivision using a region-growing algorithm.
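The rigid topology mentioned above is what makes a deterministic search possible: once one half-edge of the pattern is paired with one half-edge of the subdivision, the fixed cyclic order of edges forces the rest of the correspondence. The sketch below illustrates only that propagation step on a half-edge (DCEL-like) encoding; it is our simplified reading, and it omits the dual/RAG hybrid representation and the seeding strategy on which the paper's linear bound relies.

```python
def grow_match(P, T, seed_p, seed_t):
    """Grow a half-edge correspondence pattern -> subdivision from one seed pair.

    P and T map 'next' and 'twin' to dicts over half-edge ids; a pattern
    half-edge whose twin lies outside the pattern maps to None. Because the
    cyclic order of edges around each face is fixed, the whole correspondence
    is forced once the seed pair is chosen; any inconsistency means the
    pattern does not occur at this seed.
    """
    mapping = {seed_p: seed_t}
    stack = [seed_p]
    while stack:
        hp = stack.pop()
        ht = mapping[hp]
        for op in ('next', 'twin'):
            hp_next = P[op].get(hp)
            if hp_next is None:              # pattern boundary: nothing to follow
                continue
            ht_next = T[op][ht]
            if hp_next in mapping:
                if mapping[hp_next] != ht_next:
                    return None              # forced images disagree: no match here
            else:
                mapping[hp_next] = ht_next
                stack.append(hp_next)
    return mapping

def find_pattern(P, T):
    """Naive seeding: try every half-edge of the subdivision as the seed image.

    This loop is quadratic in the worst case; the paper's contribution is a
    representation that brings the whole search down to linear time.
    """
    seed_p = next(iter(P['next']))
    for seed_t in T['next']:
        mapping = grow_match(P, T, seed_p, seed_t)
        if mapping is not None:
            return mapping
    return None
```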
{"title":"A Linear Algorithm for Exact Pattern Matching in Planar Subdivisions","authors":"Pedro Ribeiro de Andrade Neto, A. Guedes","doi":"10.1109/SIBGRAPI.2005.5","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.5","url":null,"abstract":"Graph sub-isomorphism is a very common approach to solving pattern search problems, but this is a NP-complete problem. This way, it is necessary to invest in research of approximate solutions, or in special cases of the problem. Planar subdivisions can be considered as a special case of graphs, because, in addition to nodes and edges, there is a more rigid topology in relation to the order of the edges, arising to the concept of face. This work presents a linear algorithm for pattern search in planar subdivisions. The presented algorithm is based on a hybrid approach between the dual and the region adjacency graph (RAG) to represent the patterns, saving any additional storage cost. Thus, the patterns are looked over the search subdivision, using a region growing algorithm.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124726361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.16
Binary Image Operator Design Based on Stacked Generalization
N. Hirata
Stacked generalization refers to any learning schema that consists of multiple levels of training. Level-zero classifiers are those that depend solely on the input data, while classifiers at higher levels may use the output of lower levels as their input. Stacked generalization can be used to address the difficulties related to the design of image operators defined on large windows. This paper describes a simple stacked generalization schema for the design of binary image operators and presents several application examples that show its effectiveness as a training schema.
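A small sketch of the two-level schema, with scikit-learn classifiers standing in for the window-based image operators designed in the paper; the synthetic data, the classifier choices and the use of out-of-fold predictions to build the level-one training set are our assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# X: one row per pixel, holding the binary values seen in a window around it
# y: the desired output pixel value (the "ideal" image)
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 9))        # e.g. 3x3 windows, flattened
y = (X.sum(axis=1) > 4).astype(int)           # stand-in for a real target operator

# level zero: independent operators (each could observe a different sub-window)
level0 = [DecisionTreeClassifier(max_depth=5, random_state=0),
          DecisionTreeClassifier(max_depth=None, random_state=1)]

# out-of-fold predictions keep the level-one combiner from being trained
# on memorized level-zero outputs
Z = np.column_stack([cross_val_predict(c, X, y, cv=5) for c in level0])
for c in level0:
    c.fit(X, y)

# level one: a classifier trained on the level-zero outputs
level1 = LogisticRegression().fit(Z, y)

def apply_stack(X_new):
    Z_new = np.column_stack([c.predict(X_new) for c in level0])
    return level1.predict(Z_new)

print(np.mean(apply_stack(X) == y))           # training accuracy of the stack
```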
{"title":"Binary Image Operator Design Based on Stacked Generalization","authors":"N. Hirata","doi":"10.1109/SIBGRAPI.2005.16","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.16","url":null,"abstract":"Stacked generalization refers to any learning schema that consists of multiple levels of training. Level zero classifiers are those that depend solely on input data while classifiers at other levels may use the output of lower levels as the input. Stacked generalization can be used to address the difficulties related to the design of image operators defined on large windows. This paper describes a simple stacked generalization schema for the design of binary image operators and presents several application examples that show its effectiveness as a training schema.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121224386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.29
High-Quality Hardware-Based Ray-Casting Volume Rendering Using Partial Pre-Integration
Rodrigo Espinha, Waldemar Celes Filho
In this paper, we address the problem of interactive volume rendering of unstructured meshes and propose a new hardware-based ray-casting algorithm using partial pre-integration. The proposed algorithm makes use of modern programmable graphics cards and achieves rendering rates competitive with full pre-integration approaches (up to 2M tet/sec). The algorithm allows interactive modification of the transfer function and results in high-quality images, since it avoids the under-sampling artifacts introduced by full numerical pre-integration. We also compare our approach with implementations of the cell-projection algorithm and demonstrate that ray-casting can perform better than cell projection, because it eliminates the high costs involved in ordering and transferring data.
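The structure being accelerated is easiest to see per ray: composite, front to back, the contributions of the tetrahedra the ray crosses, with each segment's colour and opacity given by the volume-rendering integral under a linear scalar variation. The sketch below evaluates that integral numerically on the CPU for clarity; the paper's point is to replace this inner loop with a partial pre-integration lookup on the GPU. The toy transfer function and all names are ours.

```python
import numpy as np

def transfer_function(s):
    """Toy 1D transfer function: scalar in [0, 1] -> (rgb colour, extinction)."""
    colour = np.array([s, 0.2, 1.0 - s])
    tau = 4.0 * s * s
    return colour, tau

def segment_contribution(s_front, s_back, length, steps=32):
    """Colour and opacity of one ray/tetrahedron segment.

    Numerical evaluation of the volume-rendering integral, assuming the scalar
    varies linearly between the entry and exit faces. A hardware renderer
    replaces this loop by a (partial) pre-integration table lookup indexed by
    (s_front, s_back, length).
    """
    dt = length / steps
    colour = np.zeros(3)
    transparency = 1.0
    for t in np.linspace(0.0, 1.0, steps):
        c, tau = transfer_function((1.0 - t) * s_front + t * s_back)
        a = 1.0 - np.exp(-tau * dt)          # opacity of one sub-step
        colour += transparency * a * c
        transparency *= 1.0 - a
    return colour, 1.0 - transparency

def composite_ray(segments):
    """Front-to-back compositing of (s_front, s_back, length) segments,
    assumed already sorted along the ray."""
    colour = np.zeros(3)
    transparency = 1.0
    for s_front, s_back, length in segments:
        c, alpha = segment_contribution(s_front, s_back, length)
        colour += transparency * c
        transparency *= 1.0 - alpha
    return colour, 1.0 - transparency

print(composite_ray([(0.1, 0.4, 0.5), (0.4, 0.9, 0.3)]))
```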
{"title":"High-Quality Hardware-Based Ray-Casting Volume Rendering Using Partial Pre-Integration","authors":"Rodrigo Espinha, Waldemar Celes Filho","doi":"10.1109/SIBGRAPI.2005.29","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.29","url":null,"abstract":"In this paper, we address the problem of the interactive volume rendering of unstructured meshes and propose a new hardware-based ray-casting algorithm using partial pre-integration. The proposed algorithm makes use of modern programmable graphics card and achieves rendering rates competitive with full pre-integration approaches (up to 2M tet/sec). This algorithm allows the interactive modification of the transfer function and results in high-quality images, since no artifact due to under-sampling the full numerical pre-integration exists. We also compare our approach with implementations of cell-projection algorithm and demonstrate that ray-casting can perform better than cell projection, because it eliminates the high costs involved in ordering and transferring data.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127141657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.1
A Brief Account of the Relations between Gray-Scale Mathematical Morphologies
P. Sussner, M. E. Valle
Mathematical morphology was originally conceived as a set-theoretic approach for the processing of binary images. Approaches that extend classical binary morphology to gray-scale images are based on umbras, thresholds, level sets, or fuzzy sets. Complete lattices form a general framework for all of these approaches. This paper discusses and compares several approaches to gray-scale mathematical morphology, including the threshold, umbra, and level set approaches as well as fuzzy approaches.
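For concreteness, the standard definitions that connect two of these approaches (our summary, not the paper's notation): flat (threshold) dilation, umbra-style dilation with a structuring function, and the threshold-decomposition property that reduces the flat case to binary morphology.

```latex
\[
  (f \oplus B)(x) \;=\; \sup_{b \in B} f(x - b)
  \qquad \text{(flat structuring element } B\text{)}
\]
\[
  (f \oplus g)(x) \;=\; \sup_{y} \bigl[\, f(y) + g(x - y) \,\bigr]
  \qquad \text{(structuring function } g\text{, umbra approach)}
\]
\[
  X_t(f) = \{\, x : f(x) \ge t \,\}, \qquad
  X_t(f \oplus B) = X_t(f) \oplus B \quad \text{for every threshold } t .
\]
```

The last identity is what allows flat gray-scale dilation to be computed by dilating each threshold set with binary morphology and stacking the results.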
{"title":"A Brief Account of the Relations between Gray-Scale Mathematical Morphologies","authors":"P. Sussner, M. E. Valle","doi":"10.1109/SIBGRAPI.2005.1","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.1","url":null,"abstract":"Mathematical morphology was originally conceived as a set theoretic approach for the processing of binary images. Approaches that extend classical binary morphology to gray-scale images are either based on umbras, thresholds, level sets, or fuzzy sets. Complete lattices form a general framework for all of these approaches. This paper discusses and compares several approaches to gray-scale mathematical morphology including the threshold, umbra, and level set approaches as well as fuzzy approaches.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125486122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.52
Two-Level Interaction Approach for Transfer Function Specification
João Luis Prauchner, C. Freitas, J. Comba
Direct volume rendering techniques are used to visualize and explore large scalar volumes. Transfer functions (TFs), which assign opacity and color to scalar values, are very important for displaying volume features, but their specification is neither trivial nor intuitive. This work presents an interactive, semi-automatic tool to assist the user in the generation of opacity and color TFs. We use the histogram approach proposed by Kindlmann and Durkin [9] to reduce the scope of candidate TFs, which are presented to the user following the Design Galleries method [18]. The combination of these two solutions leads to a single interactive tool that allows the user to deal with different aspects of TF specification.
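As a rough illustration of the histogram analysis the tool starts from, the snippet below builds the joint histogram of scalar value versus gradient magnitude, in which material boundaries show up as arches (in the spirit of Kindlmann and Durkin [9]). The toy volume and the bin count are our choices, and the Design Galleries stage is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def value_gradient_histogram(volume, bins=128):
    """Joint histogram of scalar value vs. gradient magnitude."""
    gz, gy, gx = np.gradient(volume)
    gmag = np.sqrt(gx * gx + gy * gy + gz * gz)
    hist, v_edges, g_edges = np.histogram2d(volume.ravel(), gmag.ravel(), bins=bins)
    return hist, v_edges, g_edges

# toy volume: a smoothed sphere, so a single boundary arch is expected
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = gaussian_filter(255.0 * (np.sqrt(x**2 + y**2 + z**2) < 20.0), sigma=2.0)
hist, v_edges, g_edges = value_gradient_histogram(volume)
print(hist.shape)
```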
{"title":"Two-Level Interaction Approach for Transfer Function Specification","authors":"João Luis Prauchner, C. Freitas, J. Comba","doi":"10.1109/SIBGRAPI.2005.52","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.52","url":null,"abstract":"Direct volume rendering techniques are used to visualize and explore large scalar volumes. Transfer functions (TFs) that assign opacity and color to scalar values are very important to display volume features, but their specification is not trivial or intuitive. This work presents an interactive, semi-automatic tool to assist the user in the generation of opacity and color TFs. We use the histogram approach proposed by Kindlmann and Durkin [9] to reduce the scope of candidate TFs presented to the user following the Design Galleries method [18]. The combination of these two solutions leads to a single interactive tool that allows the user to deal with different aspects of TF specification.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130686697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.19
Combining Methods to Stabilize and Increase Performance of Neural Network-Based Classifiers
Fabricio A. Breve, M. Ponti, N. Mascarenhas
In this paper we present a set of experiments aimed at recognizing materials in multispectral images obtained with a tomograph scanner. The images were classified by a neural-network-based classifier (Multilayer Perceptron), and classifier combining techniques (Bagging, Decision Templates and Dempster-Shafer) were investigated. We also present a performance comparison between the individual classifiers and the combiners. The results were evaluated by the estimated error (obtained using the hold-out technique) and the Kappa coefficient, and they showed performance stabilization.
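Two of the ingredients above are easy to show in miniature: the Decision Templates combiner (assign the class whose template, i.e. mean training decision profile, is closest to the current profile) and the Kappa coefficient used in the evaluation. The numbers are made up; this is a generic sketch, not the paper's experimental setup.

```python
import numpy as np

def kappa(confusion):
    """Kappa coefficient from a confusion matrix (rows: reference, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_o = np.trace(confusion) / n                                          # observed agreement
    p_e = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / (n * n)  # chance agreement
    return (p_o - p_e) / (1.0 - p_e)

def decision_templates(profile, templates):
    """Assign the class whose template is closest to the current decision profile."""
    return int(np.argmin([np.linalg.norm(profile - t) for t in templates]))

# decision profiles: 3 classifiers x 2 classes
templates = [np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]]),   # template of class 0
             np.array([[0.2, 0.8], [0.3, 0.7], [0.1, 0.9]])]   # template of class 1
profile = np.array([[0.6, 0.4], [0.7, 0.3], [0.5, 0.5]])
print(decision_templates(profile, templates))                  # -> 0
print(round(kappa([[50, 5], [8, 37]]), 3))                     # -> 0.736
```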
{"title":"Combining Methods to Stabilize and Increase Performance of Neural Network-Based Classifiers","authors":"Fabricio A. Breve, M. Ponti, N. Mascarenhas","doi":"10.1109/SIBGRAPI.2005.19","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.19","url":null,"abstract":"In this paper we present a set of experiments in order to recognize materials in multispectral images, which were obtained with a tomograph scanner. These images were classified by a neural network based classifier (Multilayer Perceptron) and classifier combining techniques (Bagging, Decision Templates and Dempster-Shafer) were investigated. We also present a performance comparison between the individual classifiers and the combiners. The results were evaluated by the estimated error (obtained using the Hold-Out technique) and the Kappa coefficient, and they showed performance stabilization.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114579801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.18
CHF: A Scalable Topological Data Structure for Tetrahedral Meshes
Marcos Lage, T. Lewiner, H. Lopes, L. Velho
This work introduces a scalable topological data structure for manifold tetrahedral meshes called Compact Half-Face (CHF). It provides a high degree of scalability, since it is able to optimize the memory consumption/execution time ratio for different applications and data by using features of its different levels. An object-oriented API using class inheritance and virtual instantiation provides a single interface for each function at any level. CHF requires very little memory and is simple to implement and easy to use, since it replaces pointers with containers of integers and basic bitwise rules.
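A minimal sketch of the integer encoding behind this idea, assuming the common convention that half-face 4t + i of tetrahedron t is the face opposite its i-th vertex; the array names (V, O) and the tiny two-tetrahedra mesh are illustrative, not the paper's exact API.

```python
import numpy as np

class CHFSketch:
    def __init__(self, V, O):
        self.V = np.asarray(V)   # V[4*t + i]: i-th vertex of tetrahedron t
        self.O = np.asarray(O)   # O[hf]: opposite (mate) half-face, -1 on the boundary

    @staticmethod
    def tet(hf):
        return hf >> 2           # which tetrahedron owns this half-face

    @staticmethod
    def local(hf):
        return hf & 3            # local index of the half-face inside its tetrahedron

    def halfface_vertices(self, hf):
        """The three vertices of the face: all tet vertices except the opposite one."""
        t, i = self.tet(hf), self.local(hf)
        return [int(self.V[4 * t + j]) for j in range(4) if j != i]

    def mate(self, hf):
        """Half-face glued to hf in the neighbouring tetrahedron (-1 if boundary)."""
        return int(self.O[hf])

# two tetrahedra sharing the face (1, 2, 3)
V = [0, 1, 2, 3,        # tetrahedron 0
     4, 1, 2, 3]        # tetrahedron 1
O = [4, -1, -1, -1,     # half-face 0 of tet 0 is glued to ...
     0, -1, -1, -1]     # ... half-face 0 of tet 1, and vice versa
mesh = CHFSketch(V, O)
print(mesh.halfface_vertices(0), mesh.mate(0))   # -> [1, 2, 3] 4
```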
{"title":"CHF: A Scalable Topological Data Structure for Tetrahedral Meshes","authors":"Marcos Lage, T. Lewiner, H. Lopes, L. Velho","doi":"10.1109/SIBGRAPI.2005.18","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.18","url":null,"abstract":"This work introduces a scalable topological data structure for manifold tetrahedral meshes called Compact Half-Face (CHF). It provides a high degree of scalability, since it is able to optimize the memory consumption/execution time ratio for different applications and data by using features of its different levels. An object-oriented API using class inheritance and virtual instantiation enables a unique interface for each function at any level. CHF requires very few memory, is simple to implement and easy to use, since it substitutes pointers by container of integers and basic bit-wise rules.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114595981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.23
Content-Based Diagnostic Hysteroscopy Summaries for Video Browsing
Wilson Gavião, J. Scharcanski
In hospital practice, several diagnostic hysteroscopy videos are produced daily. These videos are continuous (uninterrupted) sequences, usually recorded in full. However, only a few segments of the recorded videos are relevant from the diagnosis/prognosis point of view and need to be evaluated and referenced later. This paper proposes a new technique to identify clinically relevant segments in diagnostic hysteroscopy videos, producing a rich and compact video summary which supports fast video browsing. Our approach also facilitates the selection of representative key-frames for reporting the video contents in the patient records. The proposed approach proceeds in two stages. First, statistical techniques are used to select relevant video segments; then, a post-processing stage merges adjacent video segments that are similar, reducing temporal over-segmentation. Our preliminary experimental results indicate that the method produces compact video summaries containing a selection of clinically relevant video segments. These results were validated by specialists.
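The post-processing stage is the easiest part to illustrate: merge temporally adjacent segments whose colour histograms are close, which reduces over-segmentation. The histogram distance, the threshold and the toy data below are our assumptions; the statistical selection of clinically relevant segments that precedes this stage is not reproduced.

```python
import numpy as np

def merge_similar_segments(segments, histograms, threshold=0.15):
    """Merge adjacent (start, end) segments whose normalized histograms are close."""
    merged = [segments[0]]
    merged_hist = [np.asarray(histograms[0], dtype=float)]
    for seg, h in zip(segments[1:], histograms[1:]):
        h = np.asarray(h, dtype=float)
        # L1 distance between normalized histograms lies in [0, 2]
        if np.abs(h - merged_hist[-1]).sum() < threshold:
            merged[-1] = (merged[-1][0], seg[1])               # extend the previous segment
            merged_hist[-1] = 0.5 * (merged_hist[-1] + h)      # crude running representative
        else:
            merged.append(seg)
            merged_hist.append(h)
    return merged

segments = [(0, 40), (41, 80), (81, 200)]
hists = [[0.50, 0.50], [0.52, 0.48], [0.10, 0.90]]
print(merge_similar_segments(segments, hists))   # -> [(0, 80), (81, 200)]
```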
{"title":"Content-Based Diagnostic Hysteroscopy Summaries for Video Browsing","authors":"Wilson Gavião, J. Scharcanski","doi":"10.1109/SIBGRAPI.2005.23","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.23","url":null,"abstract":"In hospital practice, several diagnostic hysteroscopy videos are produced daily. These videos are continuous (non-interrupted) video sequences, usually recorded in full. However, only a few segments of the recorded videos are relevant from the diagnosis/prognosis point of view, and need to be evaluated and referenced later. This paper proposes a new technique to identify clinically relevant segments in diagnostic hysteroscopy videos, producing a rich and compact video summary which supports fast video browsing. Also, our approach facilitates the selection of representative key-frames for reporting the video contents in the patient records. The proposed approach requires two stages. Initially, statistical techniques are used for selecting relevant video segments. Then, a post-processing stage merges adjacent video segments that are similar, reducing temporal video over-segmentation. Our preliminary experimental results indicate that our method produces compact video summaries containing a selection of critically relevant video segments. These experimental results were validated by specialists.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133832943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2005-10-09. DOI: 10.1109/SIBGRAPI.2005.11
Analytic Antialiasing for Selective High Fidelity Rendering
P. Longhurst, K. Debattista, R. Gillibrand, A. Chalmers
Images rendered using global illumination algorithms are considered amongst the most realistic in 3D computer graphics. However, this high fidelity comes at a significant computational expense, a major part of which arises from the sampling required to eliminate aliasing errors. These errors occur due to the discrete sampling of continuous geometry space inherent to these techniques. In this paper we present a fast analytic method for predicting in advance where antialiasing needs to be computed. This prediction is based on a rapid visualisation of the scene using a GPU, which is used to drive a selective renderer. We are able to significantly reduce the overall number of anti-aliasing rays traced, producing an image that is perceptually indistinguishable from the high-quality image at a much reduced computational cost.
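In spirit, the prediction step flags pixels near discontinuities in a cheap preview of the scene and spends extra anti-aliasing rays only there. The sketch below builds such a mask on the CPU from object-id and depth buffers; the paper derives it from a rapid GPU visualisation instead, and the buffer names and threshold here are illustrative.

```python
import numpy as np
from scipy import ndimage

def antialias_mask(id_buffer, depth_buffer, depth_threshold=0.05):
    """Flag pixels whose 3x3 neighbourhood contains an object-id or depth discontinuity."""
    id_edges = (ndimage.maximum_filter(id_buffer, size=3) !=
                ndimage.minimum_filter(id_buffer, size=3))
    depth_range = (ndimage.maximum_filter(depth_buffer, size=3) -
                   ndimage.minimum_filter(depth_buffer, size=3))
    return id_edges | (depth_range > depth_threshold)

# toy preview buffers: one object in front of a background plane
ids = np.zeros((8, 8), dtype=int)
ids[2:6, 2:6] = 1
depth = np.where(ids == 1, 0.3, 0.8)
mask = antialias_mask(ids, depth)
print(int(mask.sum()), "of", mask.size, "pixels receive extra anti-aliasing rays")
```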
{"title":"Analytic Antialiasing for Selective High Fidelity Rendering","authors":"P. Longhurst, K. Debattista, R. Gillibrand, A. Chalmers","doi":"10.1109/SIBGRAPI.2005.11","DOIUrl":"https://doi.org/10.1109/SIBGRAPI.2005.11","url":null,"abstract":"Images rendered using global illumination algorithms are considered amongst the most realistic in 3D computer graphics. However, this high fidelity comes at a significant computational expense. A major part of this cost arises from the sampling required to eliminate aliasing errors. These errors occur due to the discrete sampling of continuous geometry space inherent to these techniques. In this paper we present a fast analytic method for predicting in advance where antialiasing needs to be computed. This prediction is based on a rapid visualisation of the scene using a GPU, which is used to drive a selective renderer. We are able to significantly reduce the overall number of aniti-aliasing rays traced, producing an image that is perceptually indistinguishable from the high quality image at a much reduced computational cost.","PeriodicalId":193103,"journal":{"name":"XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI'05)","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133172335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}