Constraint-based geometric modeling is the standard modeling paradigm in modern CAD systems. Generally, the user defines constraints on the geometric objects and a solver is applied to find a configuration of the geometry that satisfies these constraints. Proper application of these constraints allows rapid modification of the geometry without loss of design intent. However, in current CAD systems, constraint solving for free-form geometric objects is generally limited. In particular, constraining global features, such as limits on a curve's curvature values, is not supported. In this paper we present a general method, within the constraint-based framework, to construct global constraints on free-form curves. The method starts by defining sufficient conditions on the curves in terms of an inequality expression; unlike local constraints, the global constraint expression is defined over the entire domain of the curves. We then transform the expression into a symbolic polynomial whose coefficients are symbolic expressions of the original curves. In the final step, a set of inequality constraints is applied in terms of the symbolic coefficients; these inequality constraints enforce the positivity of the symbolic polynomial. The final inequality constraints are fed into the solver along with any other local constraints the user has provided on the curves. Therefore, the solution returned by the solver satisfies both the global constraints and any other local constraints the user supplies. We have implemented a prototype of our method using existing commercial constraint solvers. We present results on several problems that are handled as global geometric constraints using our method.
Iddo Hanniel and Kirk Haller, "Solving global geometric constraints on free-form curves," Symposium on Solid and Physical Modeling, 2009. doi:10.1145/1629255.1629295
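The positivity step can be illustrated with the classical sufficient condition the abstract alludes to: a polynomial written in the Bernstein basis is positive on its whole domain whenever all its Bernstein coefficients are positive. The sketch below is not the paper's actual symbolic construction on curve control points; it only certifies positivity of a univariate polynomial on [0, 1] under that assumption:

```python
from math import comb

def power_to_bernstein(a):
    """Convert coefficients a[k] of p(t) = sum a[k] t^k into
    Bernstein coefficients b[i] on [0, 1]."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def certified_positive(a):
    """Sufficient (not necessary) test: if every Bernstein coefficient
    is positive, then p(t) > 0 for all t in [0, 1].  A constraint solver
    can enforce these coefficient inequalities instead of the original
    global inequality."""
    return all(b > 0 for b in power_to_bernstein(a))

# p(t) = t^2 - t + 1 has minimum 0.75 on [0, 1]
print(power_to_bernstein([1.0, -1.0, 1.0]))   # [1.0, 0.5, 1.0]
print(certified_positive([1.0, -1.0, 1.0]))   # True

# The condition is only sufficient: (t - 0.5)^2 + 0.01 is positive,
# yet one Bernstein coefficient is negative at degree 2 (degree
# elevation would eventually certify it).
print(certified_positive([0.26, -1.0, 1.0]))  # False
```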
We present an efficient technique to model sound propagation accurately in an arbitrary 3D scene by numerically integrating the wave equation. We show that by performing an offline modal analysis and using eigenvalues from a refined mesh, we can simulate sound propagation with reduced dispersion on a much coarser mesh, enabling accelerated computation. Since performing a modal analysis on the complete scene is usually not feasible, we present a domain decomposition approach to drastically shorten the pre-processing time. We introduce a simple, efficient and stable technique for handling the communication between the domain partitions. We validate the accuracy of our approach against cases with known analytical solutions. With our approach, we have observed up to an order of magnitude speedup compared to a standard finite-difference technique.
N. Raghuvanshi, Nico Galoppo, and M. Lin, "Accelerated wave-based acoustics simulation," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364916
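Why modal analysis reduces numerical dispersion is easiest to see in one dimension: each eigenmode of the wave equation evolves by an exact cosine factor, so advancing modal amplitudes analytically introduces no dispersion at any time step. A minimal 1D string sketch (the paper operates on 3D scenes with domain decomposition; this toy example is only illustrative):

```python
import math

# Eigenmodes of a unit string with fixed ends: phi_n(x) = sin(n*pi*x),
# eigenfrequencies omega_n = n*pi*c.  Multiplying each modal amplitude
# by cos(omega_n * t) integrates the wave equation exactly -- no
# numerical dispersion, regardless of the step size.
def advance_modes(amplitudes, t, c=1.0):
    return [a * math.cos((n + 1) * math.pi * c * t)
            for n, a in enumerate(amplitudes)]

def evaluate(amplitudes, x):
    return sum(a * math.sin((n + 1) * math.pi * x)
               for n, a in enumerate(amplitudes))

# Initial shape u(x, 0) = sin(pi x): a single mode.
amps = advance_modes([1.0, 0.0, 0.0], t=0.25)  # u(x, t) = sin(pi x) cos(pi t)
u = evaluate(amps, x=0.5)                      # exact: cos(pi/4) ~ 0.7071
print(u)
```

A finite-difference scheme on the same string would accumulate phase error per step; the modal amplitudes above stay exact for any `t`.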
Traditionally, it has been assumed that "design specifications" are given a priori and constitute the primary constraints on the process of search that is design. This view is under challenge in emergentist accounts of design, which observe that, for well-understood functional needs, experienced designers are able to come up with good designs very quickly. It is hypothesized that this is possible because search is minimized using novel functional constraints that emerge from experience. These emergent aspects are difficult to model in a computational framework, and this work is a preliminary attempt in this direction. "Well-understood functions" are assumed to be quantifiable in terms of some performance metrics, which permits us to identify regions of high functional validity as emergent constraint regions in the design space. In addition, designers often change the design space itself and negotiate the initial specifications in many ways. We show that small changes in the design space may result in large changes in this mapping, which is why such emergent knowledge is limited to a specific embodiment. Introducing such measures into future solid modeling systems may restrict the human designer's search to the more ill-posed aspects of the problem.
M. Dabbeeru and A. Mukerjee, "Negotiating design specifications: evolving functional constraints in mechanical assembly design," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364948
This paper addresses the problem of computing planar patterns for compression garments. In the garment industry, compression garments are increasingly used to retain the shape of the human body, with a prescribed strain (or normal pressure) designed at certain places on the garment. Varying strain values and distributions can only be generated by sewing different 2D patterns and warping them onto the body. We present a physical/geometric approach to compute 2D meshes that, when folded onto the 3D body, generate a user-defined strain distribution through proper distortion. This is the opposite of the widely studied mesh parameterization problem, whose objective is to minimize the distortion between the 2D and 3D meshes in angle, area, or length.
Charlie C. L. Wang and K. Tang, "Pattern computation for compression garment," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364929
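The link between a 2D pattern and the strain it generates can be sketched per edge: strain is the relative elongation from the planar edge to its warped 3D image, and inverting that relation gives the pattern edge length needed for a target strain. The per-edge model and function names below are our own illustration, not the paper's mesh-level formulation:

```python
import math

def edge_strain(p2d, q2d, p3d, q3d):
    """Tensile strain of a pattern edge when the 2D edge (p2d, q2d)
    is warped onto the body edge (p3d, q3d)."""
    l2 = math.dist(p2d, q2d)
    l3 = math.dist(p3d, q3d)
    return (l3 - l2) / l2

def pattern_length_for_strain(l3d, strain):
    """Inverse design: the 2D edge length that yields a target strain
    once stretched to the 3D length l3d."""
    return l3d / (1.0 + strain)

# A 10 cm stretch of body covered by a 9 cm pattern edge -> ~11% strain
print(edge_strain((0, 0), (9, 0), (0, 0, 0), (10, 0, 0)))
# To hit 25% strain over a 10 cm body edge, cut the pattern at 8 cm
print(pattern_length_for_strain(10.0, 0.25))
```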
Surface matching is fundamental to shape computing and various downstream applications. This paper develops a powerful pants-decomposition framework for computing maps between surfaces with arbitrary topologies. We first conduct pants decomposition on both surfaces to segment them into consistent sets of pants patches (here a pants patch is intuitively defined as a genus-zero surface with three boundaries). Then we compose a global mapping between the two surfaces from harmonic maps of corresponding patches. This framework has several key advantages over other state-of-the-art techniques. First, the surface decomposition is automatic and general: it can automatically construct mappings for surfaces with the same, possibly complicated, topology, and the result is guaranteed to be continuous and one-to-one. Second, the mapping framework is flexible and powerful: not only topology and geometry, but also semantics can be integrated into it with little user involvement. Specifically, it provides an easy and intuitive human-computer interaction mechanism so that mappings between surfaces with different topologies, or with additional point/curve constraints, can be obtained within our framework. Compared with previous user-guided, piecewise surface mapping techniques, our new method is more intuitive, less labor-intensive, and requires no user expertise in computing complicated surface maps between arbitrary shapes. We conduct various experiments to demonstrate its modeling potential and effectiveness.
Xin Li, X. Gu, and Hong Qin, "Surface matching using consistent pants decomposition," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364920
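The per-patch harmonic maps can be sketched in their simplest discrete form: boundary vertices of a patch are fixed, and each interior vertex is placed at the (weighted) average of its neighbors. A toy example with uniform weights on a one-interior-vertex graph (real implementations typically use cotangent weights and solve a sparse linear system rather than iterating):

```python
# Fixed patch boundary: four corners of a unit square.
boundary = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (1.0, 1.0), 3: (0.0, 1.0)}
neighbors = {4: [0, 1, 2, 3]}      # one interior vertex, linked to all corners

positions = dict(boundary)
positions[4] = (0.9, 0.1)          # arbitrary starting guess

# Jacobi iteration of the discrete Laplace equation: each sweep moves
# every interior vertex to the average of its neighbors.
for _ in range(50):
    for v, nbrs in neighbors.items():
        xs = [positions[u][0] for u in nbrs]
        ys = [positions[u][1] for u in nbrs]
        positions[v] = (sum(xs) / len(xs), sum(ys) / len(ys))

print(positions[4])   # (0.5, 0.5): the harmonic (barycentric) position
```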
This paper analyses the probability that randomly deployed sensor nodes triangulate any point within the target area. Its major result is the probability of triangulation for any point given the number of nodes lying within a specific distance (2 units) of it, employing a graph representation in which an edge exists between any two nodes closer than 2 units to one another. The expected number of un-triangulated coverage holes, i.e. uncovered areas that cannot be triangulated by adjacent nodes, in a finite target area is derived. Simulation results corroborate the probabilistic analysis with low error for any node density. These results will find applications in triangulation-based or trilateration-based positioning analysis, or any computational geometry application within the context of triangulation.
Xiaoyun Li and D. Hunter, "Probabilistic model of triangulation," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364943
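The quantity being analysed can be estimated empirically: a point is triangulated when three nodes within 2 units of it form a triangle containing it. A Monte Carlo sketch of this model (a simplification of the paper's graph formulation; the deployment parameters below are arbitrary):

```python
import random
from itertools import combinations

def point_in_triangle(p, a, b, c):
    """Containment test via the signs of three cross products."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def triangulated(p, nodes, r=2.0):
    """Is p inside some triangle of nodes that all lie within r of p?"""
    near = [q for q in nodes if (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2 <= r * r]
    return any(point_in_triangle(p, a, b, c) for a, b, c in combinations(near, 3))

random.seed(1)
L, n, trials = 10.0, 60, 500      # area side, node count, Monte Carlo trials
hits = sum(
    triangulated((L / 2, L / 2),
                 [(random.uniform(0, L), random.uniform(0, L)) for _ in range(n)])
    for _ in range(trials))
print(hits / trials)              # estimated triangulation probability
```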
Sphere packing arrangements are frequently found in nature, exhibiting efficient space-filling and energy minimization properties. Close sphere packings provide a tight, uniform, and highly symmetric spatial sampling at a single resolution. We introduce the Multiresolution Sphere Packing Tree (MSP-tree): a hierarchical spatial data structure based on sphere packing arrangements suitable for 3D space representation and selective refinement. Compared to the commonly used octree, MSP-tree offers three advantages: a lower fanout (a factor of four compared to eight), denser packing (about 24% denser), and persistence (sphere centers at coarse resolutions persist at finer resolutions). We present MSP-tree both as a region-based approach that describes the refinement mechanism succinctly and intuitively, and as a lattice-based approach better suited for implementation. The MSP-tree offers a robust, highly symmetric tessellation of 3D space with favorable image processing properties.
Jiro Inoue and A. J. Stewart, "Multiresolution sphere packing tree: a hierarchical multiresolution 3D data structure," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364954
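The density advantage of close packings over the cubic arrangement underlying octrees comes from classical packing fractions. (The paper's "about 24% denser" figure refers to its specific MSP-tree vs. octree comparison; the constants below are only the textbook packing fractions that motivate it.)

```python
import math

# Fraction of space occupied by equal spheres in each arrangement:
simple_cubic = math.pi / 6                   # ~0.5236: cubic grid (octree-style)
fcc_close = math.pi / (3 * math.sqrt(2))     # ~0.7405: close packing (Kepler bound)

print(simple_cubic)
print(fcc_close)
print(fcc_close / simple_cubic)              # close packing fills ~41% more space
```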
Computed tomography (CT), and in particular super-fast 64- and 256-detector CT, has rapidly advanced over recent years, such that high-resolution cardiac imaging has become a reality. In this paper, we briefly introduce a framework that we have built to construct three-dimensional (3D) finite-element and boundary-element mesh models of the human heart directly from high-resolution CT imaging data. Although the overall IMAGING-MODELING framework consists of image processing, geometry processing, and meshing algorithms, our main focus in this paper is on three key geometry-processing steps: geometry cleanup, or CURATION; anatomy-guided annotation, or SEGMENTATION; and construction of a GENERALIZED OFFSET SURFACE. These three algorithms, owing to the very nature of the computation involved, can also be thought of as parts of a more general modeling technique, namely geometric modeling with distance functions. As part of the results presented in the paper, we show that our algorithms are robust enough to deal effectively with the challenges posed by real-world patient CT data collected from our radiologist collaborators.
C. Bajaj and S. Goswami, "Multi-component heart reconstruction from volumetric imaging," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364928
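Geometric modeling with a distance function, as used for the generalized offset surface, can be sketched on a voxel grid: the offset surface at distance δ from a shape S is the level set {x : d_S(x) = δ}. A minimal illustration with S a unit sphere, whose distance function is known in closed form (the paper computes distance fields from segmented patient data, not an analytic shape):

```python
import numpy as np

n, delta = 64, 0.3
ax = np.linspace(-2.0, 2.0, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")

# Signed distance to the unit sphere: negative inside, positive outside.
d = np.sqrt(x**2 + y**2 + z**2) - 1.0

offset = d - delta          # zero level set of this field = offset surface
inside = offset < 0         # region enclosed by the offset surface

# Voxel-counted volume -> (4/3) * pi * (1 + delta)^3 as the grid refines
voxel = (ax[1] - ax[0]) ** 3
volume = inside.sum() * voxel
print(volume)               # close to 9.20 for delta = 0.3
```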
We present an algorithm to triangulate a multi-resolution hierarchical hexagon mesh. The triangulation provides good triangle strips, which result in efficient rendering of the hexagon mesh, and well-proportioned triangles, which avoid rendering artifacts.
Matthew Guenette and A. J. Stewart, "Triangulation of hierarchical hexagon meshes," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364944
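One way to meet both goals at once for a single hexagon is a six-vertex triangle strip: the strip order below emits four triangles that tile the hexagon exactly. This is an illustrative construction, not necessarily the paper's strip layout across the hierarchy:

```python
import math

# Regular hexagon with unit circumradius, vertices v0..v5.
verts = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]

# Strip order [v0, v1, v5, v2, v4, v3]: each consecutive vertex triple
# is one triangle, so 6 vertices yield 4 triangles covering the hexagon.
strip = [0, 1, 5, 2, 4, 3]
tris = [(strip[i], strip[i + 1], strip[i + 2]) for i in range(len(strip) - 2)]

def area(a, b, c):
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])) / 2

total = sum(area(verts[a], verts[b], verts[c]) for a, b, c in tris)
print(len(tris), total)   # 4 triangles; total equals hexagon area 3*sqrt(3)/2
```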
G. Dupuy, B. Jobard, S. Guillon, N. Keskes, D. Komatitsch
To cope with the rapid growth in the size of volumetric datasets, research in isosurface extraction has focused in the past few years on related aspects such as surface simplification and load-balanced parallel algorithms. We present in this paper a parallel, block-wise extension of the tandem algorithm [Attali et al. 2005], which simplifies an isosurface on the fly as it is being extracted. Our approach minimizes the overall memory consumption using an adequate block splitting and merging strategy, together with a component dumping mechanism that drastically reduces the amount of memory needed for particular datasets such as those encountered in geophysics. As soon as they are detected, surface components are migrated to disk along with a meta-data index (oriented bounding box, volume, etc.) that allows improved exploration scenarios (for instance, removal of small components or selection of components with a particular orientation). For ease of implementation, we carefully describe a master/slave algorithm architecture that clearly separates the four required basic tasks. We show several results of our parallel algorithm applied to a 7000×1600×2000 geophysics dataset.
"Isosurface extraction and interpretation on very large datasets in geophysics," Symposium on Solid and Physical Modeling, 2008. doi:10.1145/1364901.1364932
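The invariant that makes block-wise processing safe can be checked on a toy volume: the cells straddling the iso-value (the ones where marching cubes emits triangles) are found identically by a global pass and by blocks that share a one-voxel overlap layer. A sketch of just this splitting invariant (the tandem algorithm additionally simplifies and dumps components, which is not shown here):

```python
import numpy as np

def active_cells(v, iso):
    """Boolean grid of cells whose 8 corners are not all on one side
    of the iso-value -- exactly the cells marching cubes would visit."""
    below = v < iso
    shape = tuple(s - 1 for s in v.shape)
    corners = [below[i:i + shape[0], j:j + shape[1], k:k + shape[2]]
               for i in (0, 1) for j in (0, 1) for k in (0, 1)]
    all_below = np.logical_and.reduce(corners)
    all_above = np.logical_and.reduce([~c for c in corners])
    return ~all_below & ~all_above

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
iso = 0.5

total = active_cells(vol, iso).sum()

# Split along the first axis with a one-voxel overlap layer so that the
# cell layer between the two blocks is not lost.
halves = [vol[:17], vol[16:]]
split = sum(active_cells(h, iso).sum() for h in halves)

print(total, split)   # identical counts: blocks can be processed independently
```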