Constraint-based geometric modeling is the standard modeling paradigm in modern CAD systems. Generally, the user defines constraints on the geometric objects and a solver is applied to find a configuration of the geometry that satisfies these constraints. Proper application of these constraints allows rapid modification of the geometry without loss of design intent. However, in current CAD systems, constraint solving for free-form geometric objects is generally limited. In particular, constraining global features, such as limits on a curve's curvature values, is not supported. In this paper we present a general method, within the constraint-based framework, to construct global constraints on free-form curves. The method starts by defining sufficient conditions on the curves in terms of an inequality expression; unlike local constraints, the global constraint expression is defined over the entire domain of the curves. We then transform the expression into a symbolic polynomial whose coefficients are symbolic expressions of the original curves. In the final step, a set of inequality constraints is applied in terms of the symbolic coefficients. These inequality constraints enforce the positivity of the symbolic polynomial. The final inequality constraints are fed into the solver along with any other local constraints the user has placed on the curves. Therefore, the solution returned by the solver satisfies both the global constraints and any local constraints the user supplies. We have implemented a prototype of our method using existing commercial constraint solvers, and we present results on several problems that are handled as global geometric constraints using our method.
{"title":"Solving global geometric constraints on free-form curves","authors":"Iddo Hanniel, Kirk Haller","doi":"10.1145/1629255.1629295","DOIUrl":"https://doi.org/10.1145/1629255.1629295","url":null,"abstract":"Constraint-based geometric modeling is the standard modeling paradigm in current modern CAD systems. Generally, the user defines constraints on the geometric objects and a solver is applied to find a configuration of the geometry, which satisfies these constraints. Proper application of these constraints allows rapid modification of the geometry without loss of design intent.\u0000 However, in current CAD systems, constraint solving for free-form geometric objects is generally limited. In particular, constraining global features such as limits on a curve's curvature values, are not supported.\u0000 In this paper we present a general method, within the constraint-based framework, to construct global constraints on free-form curves. The method starts by defining sufficient conditions on the curves in terms of an inequality expression, unlike local constraints the global constraint expression will be defined for all the domain of the curves. We then transform the expression into a symbolic polynomial, whose coefficients are symbolic expressions of the original curves. In the final step, a set of inequality constraints is applied in terms of the symbolic coefficients. These inequality constraints enforce the positivity of the symbolic polynomial.\u0000 The final inequality constraints are fed into the solver along with any other local constraints, which the user has provided on the curves. Therefore, the solution returned by the solver satisfies both the global constraints and any other local constraints the user supplies.\u0000 We have implemented a prototype of our method using existing commercial constraint solvers. We present results on several problems, which are handled as global geometric constraints using our method.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114917949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an efficient technique to model sound propagation accurately in an arbitrary 3D scene by numerically integrating the wave equation. We show that by performing an offline modal analysis and using eigenvalues from a refined mesh, we can simulate sound propagation with reduced dispersion on a much coarser mesh, enabling accelerated computation. Since performing a modal analysis on the complete scene is usually not feasible, we present a domain decomposition approach to drastically shorten the pre-processing time. We introduce a simple, efficient and stable technique for handling the communication between the domain partitions. We validate the accuracy of our approach against cases with known analytical solutions. With our approach, we have observed up to an order of magnitude speedup compared to a standard finite-difference technique.
{"title":"Accelerated wave-based acoustics simulation","authors":"N. Raghuvanshi, Nico Galoppo, M. Lin","doi":"10.1145/1364901.1364916","DOIUrl":"https://doi.org/10.1145/1364901.1364916","url":null,"abstract":"We present an efficient technique to model sound propagation accurately in an arbitrary 3D scene by numerically integrating the wave equation. We show that by performing an offline modal analysis and using eigenvalues from a refined mesh, we can simulate sound propagation with reduced dispersion on a much coarser mesh, enabling accelerated computation. Since performing a modal analysis on the complete scene is usually not feasible, we present a domain decomposition approach to drastically shorten the pre-processing time. We introduce a simple, efficient and stable technique for handling the communication between the domain partitions. We validate the accuracy of our approach against cases with known analytical solutions. With our approach, we have observed up to an order of magnitude speedup compared to a standard finite-difference technique.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116970021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traditionally, it has been assumed that "Design Specifications" are given a priori and constitute the primary constraints on the process of search that is design. This view is under challenge in emergentist accounts of design, where it is observed that, for well-understood functional needs, experienced designers are able to come up with good designs very quickly. It is hypothesized that this is possible because search is minimized using novel functional constraints that emerge from experience. These emergent aspects are difficult to model in a computational framework, and this work is a preliminary attempt in that direction. "Well-understood functions" are assumed to be quantifiable in terms of some performance metrics, which permits us to identify regions of high functional validity as emergent constraint regions in the design space. In addition, designers often change the design space itself and negotiate the initial specifications in many ways. We show that small changes in the design space may result in large changes in this mapping, which is why such emergent knowledge is limited to a specific embodiment. Introducing such measures into future solid modeling systems may reduce the human designer's search to the more ill-posed aspects of the problem.
{"title":"Negotiating design specifications: evolving functional constraints in mechanical assembly design","authors":"M. Dabbeeru, A. Mukerjee","doi":"10.1145/1364901.1364948","DOIUrl":"https://doi.org/10.1145/1364901.1364948","url":null,"abstract":"Traditionally it has been assumed \"Design Specifications\" are given a priori, and constitute the primary constraints on the process of search that is design. This view is under challenge in emergentist accounts of design, where it is seen that for well-understood functional needs, experienced designers are able to come up with good designs very quickly. It is hypothesized that this is possible because search is minimized using novel functional constraints that emerge from experience. These emergent aspects are difficult to model in a computational framework, and this work is a preliminary attempt in this direction. \"Well-understood functions\" are assumed to be quantifiable in terms of some performance metrics, which permits us to identify regions of high functional validity as emergent constraint regions in the design space. In addition, designers often change the design space itself, and negotiate the initial specs in many ways. We show that small changes in the design space may result in large changes in this mapping, which is why such emergent knowledge is limited to a specific embodiment. By introducing such measures into future solid modeling systems, it may reduce the human designer's search to the more ill-posed aspects of the problem.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125375849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses the problem of computing planar patterns for compression garments. In the garment industry, compression garments are increasingly used to retain the shape of the human body, with specific strains (or normal pressures) designed at particular places on the garment. Varying strain values and distributions can only be generated by sewing different 2D patterns and warping them onto the body. We present a physical/geometric approach to compute 2D meshes that, when folded onto the 3D body, generate a user-defined strain distribution through proper distortion. This is the opposite of the widely studied mesh parameterization problem, whose objective is to minimize the distortion in angle, area, or length between the 2D and 3D meshes.
{"title":"Pattern computation for compression garment","authors":"Charlie C. L. Wang, K. Tang","doi":"10.1145/1364901.1364929","DOIUrl":"https://doi.org/10.1145/1364901.1364929","url":null,"abstract":"This paper addresses the problem of computing planar patterns for compression garments. In the garment industry, the compression garment has been more and more widely used to retain a shape of human body, where certain strain (or normal pressure) is designed at some places on the compression garment. Variant values and distribution of strain can only be generated by sewing different 2D patterns and warping them onto the body. We present a physical/geometric approach to compute 2D meshes that, when folded onto the 3D body, can generate a user-defined strain distribution through proper distortion. This is opposite to the widely studied mesh parameterization problem, whose objective is to minimize the distortion between the 2D and 3D meshes in angle, area or length.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126753139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Surface matching is fundamental to shape computing and various downstream applications. This paper develops a powerful pants decomposition framework for computing maps between surfaces with arbitrary topologies. We first conduct pants decomposition on both surfaces to segment them into consistent sets of pants patches (here a pants patch is intuitively defined as a genus-zero surface with three boundaries). Then we compose a global mapping between the two surfaces from harmonic maps of corresponding patches. This framework has several key advantages over other state-of-the-art techniques. First, the surface decomposition is automatic and general: it can automatically construct mappings for surfaces with the same, possibly complicated, topology, and the result is guaranteed to be one-to-one and continuous. Second, the mapping framework is flexible and powerful: not only topology and geometry but also semantics can be integrated into the framework with a little user involvement. Specifically, it provides an easy and intuitive human-computer interaction mechanism so that mappings between surfaces with different topologies, or with additional point/curve constraints, can be obtained within our framework. Compared with previous user-guided, piecewise surface mapping techniques, our new method is more intuitive, less labor-intensive, and requires no expertise from the user in computing complicated surface maps between arbitrary shapes. We conduct various experiments to demonstrate its modeling potential and effectiveness.
{"title":"Surface matching using consistent pants decomposition","authors":"Xin Li, X. Gu, Hong Qin","doi":"10.1145/1364901.1364920","DOIUrl":"https://doi.org/10.1145/1364901.1364920","url":null,"abstract":"Surface matching is fundamental to shape computing and various downstream applications. This paper develops a powerful pants decomposition framework for computing maps between surfaces with arbitrary topologies. We first conduct pants decomposition on both surfaces to segment them into consistent sets of pants patches (here a pants patch is intuitively defined as a genus-zero surface with three boundaries). Then we compose global mapping between two surfaces by harmonic maps of corresponding patches. This framework has several key advantages over other state-of-the-art techniques. First, the surface decomposition is automatic and general. It can automatically construct mappings for surfaces with same but complicated topology, and the result is guaranteed to be one-to-one continuous. Second, the mapping framework is very flexible and powerful. Not only topology and geometry, but also the semantics can be easily integrated into this framework with a little user involvement. Specifically, it provides an easy and intuitive human-computer interaction mechanism so that mapping between surfaces with different topologies, or with additional point/curve constraints, can be properly obtained within our framework. Compared with previous user-guided, piecewise surface mapping techniques, our new method is more intuitive, less labor-intensive, and requires no user's expertise in computing complicated surface map between arbitrary shapes. We conduct various experiments to demonstrate its modeling potential and effectiveness.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115518666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper analyses the probability that randomly deployed sensor nodes triangulate any point within the target area. Its major result is the probability of triangulation for any point given the number of nodes lying within a specific distance (2 units) of it, employing a graph representation in which an edge exists between any two nodes closer than 2 units from one another. The expected number of un-triangulated coverage holes, i.e. uncovered areas that cannot be triangulated by adjacent nodes, in a finite target area is derived. Simulation results corroborate the probabilistic analysis with low error for any node density. These results will find applications in triangulation-based or trilateration-based positioning analysis, or any computational geometry application within the context of triangulation.
{"title":"Probabilistic model of triangulation","authors":"Xiaoyun Li, D. Hunter","doi":"10.1145/1364901.1364943","DOIUrl":"https://doi.org/10.1145/1364901.1364943","url":null,"abstract":"This paper analyses the probability that randomly deployed sensor nodes triangulate any point within the target area. Its major result is the probability of triangulation for any point given the number of nodes lying up to a specific distance (2 units) from it, employing a graph representation where an edge exists between any two nodes close than 2 units from one another. The expected number of un-triangulated coverage holes, i.e. uncovered areas which cannot be triangulated by adjacent nodes, in a finite target area is derived. Simulation results corroborate the probabilistic analysis with low error, for any node density. These results will find applications in triangulation-based or trilateration-based pointing analysis, or any computational geometry application within the context of triangulation.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"228 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128252678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sphere packing arrangements are frequently found in nature, exhibiting efficient space-filling and energy-minimization properties. Close sphere packings provide a tight, uniform, and highly symmetric spatial sampling at a single resolution. We introduce the Multiresolution Sphere Packing Tree (MSP-tree): a hierarchical spatial data structure based on sphere packing arrangements, suitable for 3D space representation and selective refinement. Compared to the commonly used octree, the MSP-tree offers three advantages: a lower fanout (a factor of four compared to eight), denser packing (about 24% denser), and persistence (sphere centers at coarse resolutions persist at finer resolutions). We present the MSP-tree both as a region-based approach that describes the refinement mechanism succinctly and intuitively, and as a lattice-based approach better suited for implementation. The MSP-tree offers a robust, highly symmetric tessellation of 3D space with favorable image processing properties.
{"title":"Multiresolution sphere packing tree: a hierarchical multiresolution 3D data structure","authors":"Jiro Inoue, A. J. Stewart","doi":"10.1145/1364901.1364954","DOIUrl":"https://doi.org/10.1145/1364901.1364954","url":null,"abstract":"Sphere packing arrangements are frequently found in nature, exhibiting efficient space-filling and energy minimization properties. Close sphere packings provide a tight, uniform, and highly symmetric spatial sampling at a single resolution. We introduce the Multiresolution Sphere Packing Tree (MSP-tree): a hierarchical spatial data structure based on sphere packing arrangements suitable for 3D space representation and selective refinement. Compared to the commonly used octree, MSP-tree offers three advantages: a lower fanout (a factor of four compared to eight), denser packing (about 24% denser), and persistence (sphere centers at coarse resolutions persist at finer resolutions). We present MSP-tree both as a region-based approach that describes the refinement mechanism succintly and intuitively, and as a lattice-based approach better suited for implementation. The MSP-tree offers a robust, highly symmetric tessellation of 3D space with favorable image processing properties.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132623312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The polycube T-spline has been formulated elegantly to unify T-splines and manifold splines, defining a new class of shape representations for surfaces of arbitrary topology by using a polycube map as the parametric domain. In essence, the data-fitting quality of polycube T-splines hinges upon the construction of the underlying polycube maps. Yet existing methods for polycube map construction exhibit some disadvantages. Existing approaches either require projection of points from a 3D surface to its polycube approximation, which makes it very difficult to handle cases where the two shapes differ significantly, or compute the map by conformally deforming the surfaces and polycubes to a common canonical domain and then constructing the map by function composition, which makes it challenging to control the location of singularities and complicates the subsequent data-fitting and hole-filling processes. This paper proposes a novel framework of user-controllable polycube maps, which overcomes the disadvantages of the conventional methods and is much more efficient and accurate. Our approach allows users to directly select the corner points of the polycubes on the original 3D surfaces, and then constructs the polycube maps using the new computational tool of discrete Euclidean Ricci flow. We develop algorithms for computing such polycube maps, and show that the resulting user-controllable polycube map serves as an ideal parametric domain for constructing spline surfaces and other applications. The singularities can be interactively placed where no important geometric features exist. Experimental results demonstrate that the proposed polycube maps introduce lower area distortion while retaining small angle distortion, and subsequently make the entire hole-filling process much easier to accomplish.
{"title":"User-controllable polycube map for manifold spline construction","authors":"Hongyu Wang, Miao Jin, Ying He, X. Gu, Hong Qin","doi":"10.1145/1364901.1364958","DOIUrl":"https://doi.org/10.1145/1364901.1364958","url":null,"abstract":"Polycube T-spline has been formulated elegantly that can unify T-splines and manifold splines to define a new class of shape representations for surfaces of arbitrary topology by using polycube map as its parametric domain. In essense, The data fitting quality using polycube T-splines hinges upon the construction of underlying polycube maps. Yet, existing methods for polycube map construction exhibit some disadvantages. For example, existing approaches for polycube map construction either require projection of points from a 3D surface to its polycube approximation, which is therefore very difficult to handle the cases when two shapes differ significantly; or compute the map by conformally deforming the surfaces and polycubes to the common canonical domain and then construct the map using function composition, which is challenging to control the location of singularities and makes it hard for the data-fitting and hole-filling processes later on.\u0000 This paper proposes a novel framework of user-controllable polycube maps, which can overcome disadvantages of the conventional methods and is much more efficient and accurate. The current approach allows users to directly select the corner points of the polycubes on the original 3D surfaces, then construct the polycube maps by using the new computational tool of discrete Euclidean Ricci flow. We develop algorithms for computing such polycube maps, and show that the resulting user-controllable polycube map serves as an ideal parametric domain for constructing spline surfaces and other applications. The location of singularities can be interactively placed where no important geometric features exist. Experimental results demonstrate that the proposed polycube maps introduce lower area distortion and retain small angle distortion as well, and subsequently make the entire hole-filling process much easier to accomplish.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133608172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a computational method for converting a tetrahedral mesh to a prism-tetrahedral hybrid mesh for improved solution accuracy and computational efficiency of finite element analysis. The proposed method inserts layers of prism elements and deletes tetrahedral elements in sweepable sub-domains, in which cross-sections remain topologically identical and geometrically similar along a certain sweeping path. The total number of finite elements is reduced because roughly three tetrahedral elements are converted to one prism element. The solution accuracy of the finite element analysis improves because a prism element yields a more accurate solution than a tetrahedral element. The only previously known method for creating such a prism-tetrahedral mesh was to manually decompose a target volume into sweepable and non-sweepable sub-volumes and mesh each sub-volume separately. The proposed method starts from a cross-section of a tetrahedral mesh and replaces the tetrahedral elements with layers of prism elements until the prescribed quality criteria can no longer be satisfied. The method applies a sequence of edge-collapse, local-transformation, and smoothing operations to remove or displace nodes located within the volume to be replaced by a layer of prism elements. A series of computational fluid dynamics simulations and structural analyses has been conducted, and the results verify the better performance of the prism-tetrahedral hybrid mesh in finite element simulations.
{"title":"Converting a tetrahedral mesh to a prism-tetrahedral hybrid mesh for FEM accuracy and efficiency","authors":"Soji Yamakawa, K. Shimada","doi":"10.1145/1364901.1364941","DOIUrl":"https://doi.org/10.1145/1364901.1364941","url":null,"abstract":"This paper presents a computational method for converting a tetrahedral mesh to a prism-tetrahedral hybrid mesh for improved solution accuracy and computational efficiency of finite element analysis. The proposed method inserts layers of prism elements and deletes tetrahedral elements in sweepable sub-domains, in which cross-sections remain topologically identical and geometrically similar along a certain sweeping path. The total number of finite elements is reduced because roughly three tetrahedral elements are converted to one prism element. The solution accuracy of the finite element analysis improves since a prism element yields a more accurate solution than a tetrahedral element. Only previously known method for creating such a prism-tetrahedral mesh was to manually decompose a target volume into sweepable and non-sweepable sub-volumes and mesh each sub-volume separately. The proposed method starts from a cross-section of a tetrahedral mesh and replaces the tetrahedral elements with layers of prism elements until prescribed quality criteria can no longer be satisfied. The method applies a sequence of edge-collapse, local-transformation, and smoothing operations to remove or displace nodes located within the volume to be replaced with a layer of prism elements. Series of computational fluid dynamics simulations and structural analyses have been conducted, and the results verified a better performance of prismtetrahedral hybrid mesh in finite element simulations.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117243323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computed Tomography (CT), and in particular super-fast 64- and 256-detector CT, has rapidly advanced in recent years, such that high-resolution cardiac imaging has become a reality. In this paper, we briefly introduce a framework that we have built to construct three-dimensional (3D) finite-element and boundary-element mesh models of the human heart directly from high-resolution CT imaging data. Although the overall IMAGING-MODELING framework consists of image processing, geometry processing, and meshing algorithms, our main focus in this paper revolves around three key geometry processing steps of the framework. These three steps are geometry cleanup, or CURATION; anatomy-guided annotation, or SEGMENTATION; and construction of a GENERALIZED OFFSET SURFACE. These three algorithms, due to the very nature of the computation involved, can also be thought of as parts of a more general modeling technique, namely geometric modeling with distance functions. As part of the results presented in the paper, we show that our algorithms are robust enough to deal effectively with the challenges posed by real-world patient CT data collected from our radiologist collaborators.
{"title":"Multi-component heart reconstruction from volumetric imaging","authors":"C. Bajaj, S. Goswami","doi":"10.1145/1364901.1364928","DOIUrl":"https://doi.org/10.1145/1364901.1364928","url":null,"abstract":"Computer Tomography (CT) and in particular super fast, 64 and 256 detector CT has rapidly advanced over recent years, such that high resolution cardiac imaging has become a reality. In this paper, we briefly introduce a framework that we have built to construct three dimensional (3D) finite-element and boundary element mesh models of the human heart directly from high resolution CT imaging data. Although, the overall IMAGING-MODELING framework consists of image processing, geometry processing and meshing algorithms, our main focus in this paper will revolve around three key geometry processing steps which are parts of the so-called IMAGING-MODELING framework. These three steps are geometry cleanup or CURATION, anatomy guided annotation or SEGMENTATION and construction of GENERALIZED OFFSET SURFACE. These three algorithms, due to the very nature of the computation involved, can also be thought as parts of a more generalized modeling technique, namely geometric modeling with distance function. As part of the results presented in the paper, we will show that our algorithms are robust enough to effectively deal with the challenges posed by the real-world patient CT data collected from our radiologist collaborators.","PeriodicalId":216067,"journal":{"name":"Symposium on Solid and Physical Modeling","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115543572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}