Tie-Zone Watershed, Bottlenecks, and Segmentation Robustness Analysis
Romaric Audigier, R. Lotufo
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.48
In a recent paper [1], a new type of watershed (WS) transform was introduced: the tie-zone watershed (TZWS). This region-based watershed transform does not depend on arbitrary implementation choices and provides a unique (and thereby unbiased) optimal solution. Indeed, many optimal solutions are often possible when segmenting an image by WS. The TZWS assigns a pixel to a catchment basin (CB) only if it belongs to that CB in all solutions; otherwise, the pixel is said to belong to a tie-zone (TZ). An efficient algorithm computing the TZWS, based on the Image Foresting Transform (IFT), was also proposed. In this article, we define the new concept of "bottlenecks" in the water-merging paradigm. Intuitively, the bottlenecks are the first contact points between at least two different wave fronts: the pixels in the image where differently colored waters meet and tie, and from which the tie-zones may therefore begin. They represent the origin or access points of the tie-zones (regions that cannot be labeled without making arbitrary choices). If they are preferentially assigned to one or another colored water according to an arbitrary processing order, as occurs in most watershed algorithms, an entire region (its influence zone - the "bottle"!) is conquered along with them. The bottlenecks therefore play an important role in the bias that a WS implementation can introduce. That is why we show in this paper that the analysis of both tie-zones and bottlenecks can be associated with the robustness of a segmentation.
A Maximum-Likelihood Approach for Multiresolution W-Operator Design
D. Vaquero, J. Barrera, R. Hirata
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.7
The design of W-operators from a set of input/output examples is a hard problem for large windows. From the statistical standpoint, it is hard because of the large number of examples necessary to obtain a good estimate of the joint distribution. From the computational standpoint, as the number of examples grows, memory and time requirements can reach a point where designing the operator is no longer feasible. This paper introduces a technique for joint distribution estimation in W-operator design. The distribution is represented by a multiresolution pyramidal structure, and the mean conditional entropy is proposed as a criterion for choosing between the distributions induced by different pyramids. Experimental results are presented for maximum-likelihood classifiers designed for the problem of handwritten digit classification. The analysis shows that the technique is interesting from the theoretical point of view and has the potential to be applied to computer vision and image processing problems.
Document Reconstruction Based on Feature Matching
C. Solana, Leandro A. F. Fernandes, E. Justino, M. M. O. Neto, Roberto da Silva, Luiz Oliveira, Flávio Bortolozzi, G. Crespo
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.26
We describe a procedure for reconstructing documents that have been shredded by hand, a problem that often arises in forensics. The proposed method first applies a polygonal approximation in order to reduce the complexity of the boundaries and then extracts relevant features of the polygon to carry out the local reconstruction. In this way the overall complexity can be dramatically reduced because few features are used to perform the matching. The ambiguities resulting from the local reconstruction are resolved and the pieces are merged together as we search for a global solution. We demonstrated through comprehensive experiments that this feature-matching-based procedure produces interesting results for the problem of document reconstruction.
Image Formation of Multifrequency Vibro-Acoustography: Theory and Computational Simulations
G. Silva, M. Urban
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.30
This paper presents an extension of the vibro-acoustography imaging technique. The standard technique relies on the single-frequency dynamic radiation force (or stress) produced by a highly focused dual-frequency ultrasound beam. We propose a multifrequency vibro-acoustography method based on the radiation stress generated by a beam with multiple frequencies. The system point-spread function (PSF) is obtained in terms of the acoustic emission by a point target in response to the applied radiation stress. The PSF is evaluated for eight- and sixteen-element sector array transducers. Three phantom images are used to show how the system transforms them into observed data. Considering only visual criteria such as contrast and resolution, simulations show that the sixteen-element sector transducer renders better images.
Component-Based Adaptive Sampling
K. Debattista, A. Chalmers
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.21
High-fidelity rendering of virtual environments is a notoriously expensive computational task. One commonly used method to alleviate the cost is to adaptively sample the rendered images, identifying the required number of samples according to the variance of the area, thus reducing aliasing while also reducing the total number of rays shot. When using ray tracing algorithms, the traditional method is to shoot a number of rays and, depending on the difference between the radiance of the samples, possibly shoot further rays. This approach fails to take into account that some components of a material's reflected light may exhibit more coherence than others. With this in mind we present a component-based adaptive sampling algorithm that renders components individually, adaptively samples at the component level, and finally composites the result to produce the full solution. Results demonstrate a significant improvement in performance without any perceptual loss in quality.
On the Effect of Relaxation in the Convergence and Quality of Statistical Image Reconstruction for Emission Tomography Using Block-Iterative Algorithms
E. Neto, A. R. Pierro
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.35
Relaxation is widely recognized as a useful tool for providing convergence in block-iterative algorithms [1], [2], [6]. In the present article we give new results on the convergence of RAMLA (Row-Action Maximum Likelihood Algorithm) [2], filling some important theoretical gaps. Furthermore, because RAMLA and OS-EM (Ordered Subsets - Expectation Maximization) [4] are the statistical reconstruction algorithms currently used in commercial emission tomography scanners, we present a comparison between them from the viewpoint of a specific imaging task. Our experiments show the importance of relaxation for improving image quality.
Tracking and Matching Connected Components from 3D Video
D. S. Pires, R. M. C. Junior, M. Vieira, L. Velho
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.49
This work presents a method for the detection, tracking and spatial matching of connected components in a 3D video stream. The video image information is combined with 3D sites in order to align pieces of surfaces that are seen in subsequent frames. This is a key step in 3D video analysis for enabling several applications such as compression, geometric integration and scene reconstruction, to name a few. Our approach is to detect salient features in both image and world spaces for further alignment of texture and geometry. We use a projector-camera system to obtain high quality texture and geometry at 30 fps. Successful experimental results corroborating our method are shown.
Background Subtraction and Shadow Detection in Grayscale Video Sequences
Julio C. S. Jacques Junior, C. Jung, S. Musse
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.15
Tracking moving objects in video sequences is an important problem in computer vision, with applications in several fields, such as video surveillance and target tracking. Most techniques reported in the literature use background subtraction to obtain foreground objects and apply shadow detection algorithms that explore the spectral information of the images to retrieve only valid moving objects. In this paper, we propose a small improvement to an existing background model and incorporate a novel technique for shadow detection in grayscale video sequences. The proposed algorithm works well for both indoor and outdoor sequences and does not require color cameras.
TSD: A Shape Descriptor Based on a Distribution of Tensor Scale Local Orientation
P. A. Miranda, R. Torres, A. Falcão
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.51
We present the tensor scale descriptor (TSD) — a shape descriptor for content-based image retrieval, registration, and analysis. TSD exploits the notion of local structure thickness, orientation, and anisotropy, as represented by the largest ellipse centered at each image pixel and contained within the same homogeneous region. The proposed method uses the normalized histogram of the local orientation (the angle of the ellipse) at regions of high anisotropy and of thickness within a certain interval. It is shown that TSD is invariant to rotation and to some reasonable level of scale change. Experimental results with a fish database are presented to illustrate and validate the method.
Incorporating Biomechanics into Architectural Tree Models
Julia Taylor-Hell
Pub Date : 2005-10-09  DOI: 10.1109/SIBGRAPI.2005.32
We present a method for creating tree models with realistically curved branches, useful in the portrayal of natural scenes. Instead of attempting to replicate a tree’s final shape by observation, we obtain this shape as nature does — by considering the tree’s development in the context of its environment. The final shape of the branches results from their growth in length, girth, weight and rigidity under the influence of gravity and tropisms. Using the framework of L-systems, we extend Jirasek’s biomechanical simulation of a plant axis to correctly represent an entire tree. Our model also simulates the reaction wood which actively re-orients a leaning branch by differentiating the wood production in angular portions of the branch cross-section. To obtain realistic and controllable tree architectures, we regulate growth elements in the model using functions based on botanical findings. We create a multi-year simulation of tree growth under environmental influences, obtaining a realistic tree shape at every stage of its development.