
Sixth International Conference on 3-D Digital Imaging and Modeling (3DIM 2007): Latest Publications

Deformable Registration of Textured Range Images by Using Texture and Shape Features
R. Sagawa, Nanaho Osawa, Y. Yagi
This paper describes a method to align textured range images of deforming objects. The proposed procedure aligns deformable 3D models by matching both texture and shape features. First, the characteristics of each vertex of a 3D mesh model are defined by computing a color histogram for the texture feature and the average signed distance for the shape feature. Next, the key points, which are the distinctive vertices of a model, are extracted with respect to the texture and shape features. Subsequently, the corresponding points are located by matching the key points of the models before and after deformation. The deforming parameters are computed by minimizing the distance between the corresponding points. The proposed method iterates between correspondence search and deformation to align the range images. Finally, the deformation for all vertices is computed by interpolating the parameters of the key points. In the experiments, we obtained textured range images by using a real-time range finder and a camera, and evaluated deformable registration for the range images.
DOI: 10.1109/3DIM.2007.18 · Published: 2007-08-21
Citations: 7
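The alternation between correspondence search and parameter estimation described in this abstract can be sketched in a toy form. The snippet below is a minimal, hypothetical illustration rather than the authors' implementation: key points are (descriptor, position) pairs, matching picks the nearest descriptor, and a single global 2D translation stands in for the paper's per-key-point deformation parameters.

```python
def match_keypoints(src, dst):
    """Pair each source key point with the destination key point whose
    feature descriptor is closest (squared Euclidean distance)."""
    pairs = []
    for f_s, p_s in src:
        best = min(dst, key=lambda kp: sum((a - b) ** 2 for a, b in zip(f_s, kp[0])))
        pairs.append((p_s, best[1]))
    return pairs

def register(src, dst, iters=10):
    """Alternate correspondence search and motion estimation.  A global 2D
    translation stands in for the paper's deformation model."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(f, (x + tx, y + ty)) for f, (x, y) in src]
        pairs = match_keypoints(moved, dst)
        # Least-squares translation update = mean residual over matches.
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        tx, ty = tx + dx, ty + dy
    return tx, ty
```

With exact descriptor matches and a pure translation, the loop converges in one iteration; richer deformation models need the interpolation step the abstract describes.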
Online Registration of Multi-view Range Images using Geometric and Photometric Feature Tracking
Soon-Yong Park, Jaewon Baek
An online 3D registration system is introduced to reconstruct a 3D model using a hand-held stereo camera. Multi-view range images, which are obtained from the hand-held stereo camera, are registered into a reference coordinate system simultaneously. For the coarse registration of a range image, we use the transformation of its previous frame and the centroid of the foreground range images. After refining the coarse registration using a geometric feature matching technique, we use a modified KLT (Kanade-Lucas-Tomasi) tracker to match photometric features and enhance registration accuracy. We modify the KLT tracker to facilitate the search for correspondences from the results of the geometric registration. If a range image fails to register, we adjust the orientation of the camera using a graphical user interaction system. After enough range images are registered, they are integrated into a 3D model offline. Experimental results show that the proposed method can be used to reconstruct 3D models quickly and accurately.
DOI: 10.1109/3DIM.2007.36 · Published: 2007-08-21
Citations: 5
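The coarse-registration step above (apply the previous frame's transform, then align centroids) can be illustrated with a small sketch. Here `prev_transform` is any callable mapping a 3D point, and reducing the centroid alignment to a pure translation is an assumption made for illustration, not the authors' exact formulation.

```python
def centroid(points):
    """Arithmetic mean of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def coarse_register(points, prev_transform, reference):
    """Coarse alignment of a new range image: first apply the transform
    estimated for the previous frame, then shift so the foreground centroid
    matches the reference cloud's centroid."""
    moved = [prev_transform(p) for p in points]
    c_new, c_ref = centroid(moved), centroid(reference)
    shift = tuple(c_ref[i] - c_new[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in moved]
```

The geometric and photometric refinement stages described in the abstract would then start from this coarse estimate.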
Silhouette Extraction with Random Pattern Backgrounds for the Volume Intersection Method
M. Toyoura, M. Iiyama, K. Kakusho, M. Minoh
In this paper, we present a novel approach for extracting silhouettes by using a particular pattern that we call the random pattern. The volume intersection method reconstructs the shapes of 3D objects from their silhouettes obtained with multiple cameras. With the method, if some parts of the silhouettes are missed, the corresponding parts of the reconstructed shapes are also missed. When colors of the objects and the backgrounds are similar, many parts of the silhouettes are missed. We adopt random pattern backgrounds to extract correct silhouettes. The random pattern has many small regions with randomly-selected colors. By using the random pattern backgrounds, we can keep the rate of missing parts below a specified percentage, even for objects of unknown color. To refine the silhouettes, we detect and fill in the missing parts by integrating multiple images. From the images captured by multiple cameras used to observe the object, the object's colors can be estimated. The missing parts can be detected by comparing the object's color with its corresponding background's color. In our experiments, we confirmed that this method effectively extracts silhouettes and reconstructs 3D shapes.
DOI: 10.1109/3DIM.2007.48 · Published: 2007-08-21
Citations: 8
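A toy version of the random-pattern idea: the background is tiled with small cells of randomly chosen colour, and a pixel is declared foreground when its colour departs from the known background pattern. The cell size, threshold, and per-channel test below are illustrative choices, not the authors' parameters.

```python
import random

def make_random_pattern(w, h, cell=4, seed=0):
    """Background image of small cells, each filled with a random colour."""
    rng = random.Random(seed)
    cells = {}
    img = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            key = (x // cell, y // cell)
            if key not in cells:
                cells[key] = (rng.randrange(256), rng.randrange(256), rng.randrange(256))
            img[y][x] = cells[key]
    return img

def extract_silhouette(frame, background, thresh=30):
    """Mark a pixel as foreground when its colour differs from the known
    background pattern by more than `thresh` in any channel."""
    return [[1 if any(abs(a - b) > thresh for a, b in zip(p, q)) else 0
             for p, q in zip(row_f, row_b)]
            for row_f, row_b in zip(frame, background)]
```

Because each cell's colour is random, an object of unknown colour can only coincide with the background in a bounded fraction of cells, which is what keeps the missing-part rate below a specified percentage.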
Image-based Model Completion
A. Brunton, S. Wuhrer, Chang Shu
Geometric models created from range sensors are usually incomplete. Considerable effort has been made to fix this problem, ranging from manual repair to geometric interpolation. We propose using multi-view stereo to complete such models. Our approach is practical and convenient because, when scanning an object or environment, one usually takes photographs to texture the resulting model. By using the incomplete scan data as a boundary condition, we use a variational multi-view approach to estimate the missing data.
DOI: 10.1109/3DIM.2007.29 · Published: 2007-08-21
Citations: 6
Tracking of Human Body Parts using the Multiocular Contracting Curve Density Algorithm
Markus Hahn, Lars Krüger, C. Wöhler, H. Groß
In this contribution we introduce the multiocular contracting curve density algorithm (MOCCD), a novel method for fitting a 3D parametric curve. The MOCCD is integrated into a tracking system and its suitability for tracking human body parts in 3D in front of a cluttered background is examined. The developed system can be applied to a variety of body parts, as the object model is replaceable in a simple manner. Based on the example of tracking the human hand-forearm limb, it is shown that the use of three MOCCD algorithms with three different kinematic models within the system leads to accurate and temporally stable tracking. All necessary information is obtained from the images; only a coarse initialisation of the model parameters is required. The investigations are performed on 14 real-world test sequences, which contain movements of different hand-forearm configurations in front of a complex cluttered background. We find that the use of three cameras is essential for accurate and temporally stable system performance, since otherwise the pose estimation and tracking results are strongly affected by the aperture problem. Our best method achieves a 95% recognition rate, compared to about 30% for the reference methods of 3D active contours and a curve model tracked by a particle filter. Hence, only 5% of the estimated model points exceed a distance of 12 cm with respect to the ground truth using the proposed method.
DOI: 10.1109/3DIM.2007.59 · Published: 2007-08-21
Citations: 12
Discrete Delaunay: Boundary extraction from voxel objects
Dobrina Boltcheva, D. Bechmann, S. Thery
We present a discrete approach for boundary extraction from 3D image data. The proposed technique is based on the duality between the Voronoi graph computed across the digital boundary and the Delaunay triangulation. The originality of the approach is that the algorithms perform only integer arithmetic, so the method does not suffer from the standard rounding problems and numerical instabilities of floating-point computations. This method has been applied both to segmented anatomical structures and to manufactured objects presenting corners and edges. The experimental results show that the method produces a polygonal boundary representation which is guaranteed to be a 2-manifold. This representation is successfully transformed into a quality triangular mesh which meets all topological and geometrical requirements of applications such as augmented reality or simulation.
DOI: 10.1109/3DIM.2007.21 · Published: 2007-08-21
Citations: 5
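In the same integer-only spirit, the digital boundary of a voxel object can be found by keeping every voxel with at least one empty face neighbour. This is a generic sketch of digital-boundary extraction under the standard 6-adjacency, not the paper's Voronoi/Delaunay construction itself.

```python
def boundary_voxels(voxels):
    """Return the voxels of the digital boundary: those with at least one of
    their six face neighbours outside the object.  Integer arithmetic only."""
    solid = set(voxels)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return {v for v in solid
            if any((v[0] + dx, v[1] + dy, v[2] + dz) not in solid
                   for dx, dy, dz in offsets)}
```

The paper's contribution then operates on such a digital boundary, building the Voronoi graph across it and extracting the dual Delaunay triangulation as a 2-manifold mesh.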
Joint Reconstruction and Registration of a Deformable Planar Surface Observed by a 3D Sensor
U. Castellani, V. Gay-Bellile, A. Bartoli
We address the problem of reconstruction and registration of a deforming 3D surface observed by a 3D sensor that gives a cloud of 3D points at each time instant. This problem is difficult since the basic data term does not provide enough constraints. We bring two main contributions. First, we examine a set of data and penalty terms that make the problem well-posed. The most important terms we introduce are the non-extensibility penalty and the attraction to the boundary shape. Second, we show how the error function combining all these terms can be efficiently minimized with the Levenberg-Marquardt algorithm and sparse matrices. We report convincing results for challenging datasets coming from different kinds of 3D sensors. The algorithm is robust to missing and erroneous data points, and to spurious boundary detection.
DOI: 10.1109/3DIM.2007.31 · Published: 2007-08-21
Citations: 10
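An error function combining a data term with a non-extensibility penalty can be sketched as a stacked residual vector. The weight `w_ext` and the simple vertex-to-observation data term are illustrative assumptions; the paper minimises such residuals with Levenberg-Marquardt and sparse matrices, which this sketch does not reproduce.

```python
def residuals(vertices, targets, edges, rest_lengths, w_ext=10.0):
    """Stacked residuals: a data term pulling each vertex to its observed 3D
    point, plus a non-extensibility penalty keeping each mesh edge at its
    rest length."""
    res = []
    for v, t in zip(vertices, targets):          # data term
        res.extend(v[i] - t[i] for i in range(3))
    for (a, b), l0 in zip(edges, rest_lengths):  # non-extensibility penalty
        d = sum((vertices[a][i] - vertices[b][i]) ** 2 for i in range(3)) ** 0.5
        res.append(w_ext * (d - l0))
    return res

def cost(res):
    """Sum-of-squares cost that a Levenberg-Marquardt solver would minimise."""
    return sum(r * r for r in res)
```

A surface matching its observations with unstretched edges has zero cost; stretching an edge raises the cost through the penalty term even when the data term is satisfied.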
Surface Modelling of Plants from Stereo Images
Yu Song, R. Wilson, R. Edmondson, N. Parsons
Plants are characterised by a range of complex and variable attributes, and measuring these attributes accurately and reliably is a major challenge for the industry. In this paper, we investigate creating a surface model of a plant from images taken by a stereo pair of cameras. The proposed modelling architecture comprises a fast stereo algorithm to estimate depths in the scene and a model of the scene based on visual appearance and 3D geometry measurements. Our stereo algorithm employs a coarse-to-fine strategy for disparity estimation. We develop a weighting method and use a Kalman filter to refine estimations across scales. A self-organising map is applied to reconstruct a surface from the sample points created by the stereo algorithm. We compare and evaluate our stereo results against other popular stereo algorithms, and also demonstrate that the proposed surface model can be used to extract useful plant features that can be important in plant management and in assessing quality for marketing.
DOI: 10.1109/3DIM.2007.55 · Published: 2007-08-21
Citations: 25
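The Kalman-filter refinement across scales can be illustrated with a scalar filter that fuses disparity estimates from coarse to fine. Treating each scale as an independent measurement with a known variance is an assumption made for this sketch, not the authors' exact weighting scheme.

```python
def kalman_refine(estimates, variances):
    """Fuse per-scale disparity estimates with a scalar Kalman filter: each
    finer-scale measurement updates the running state in proportion to its
    reliability.  Returns the fused estimate and its variance."""
    x, p = estimates[0], variances[0]
    for z, r in zip(estimates[1:], variances[1:]):
        k = p / (p + r)        # Kalman gain: trust in the new measurement
        x = x + k * (z - x)    # blend current state with the measurement
        p = (1 - k) * p        # uncertainty shrinks after each update
    return x, p
```

Two equally reliable estimates average exactly, and the fused variance is always below that of either input, which is what makes the coarse-to-fine refinement stable.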
Aerial Lidar Data Classification using AdaBoost
S. Lodha, D. Fitzpatrick, D. Helmbold
We use the AdaBoost algorithm to classify 3D aerial lidar scattered height data into four categories: road, grass, buildings, and trees. To do so we use five features: height, height variation, normal variation, lidar return intensity, and image intensity. We also use only lidar-derived features to organize the data into three classes (the road and grass classes are merged). We apply and test our results using ten regions taken from lidar data collected over an area of approximately eight square miles, obtaining higher than 92% accuracy. We also apply our classifier to our entire dataset, and present visual classification results both with and without uncertainty. We implement and experiment with several variations within the AdaBoost family of algorithms. We observe that our results are robust and stable over all the various tests and algorithmic variations. We also investigate features and values that are most critical in distinguishing between the classes. This insight is important in extending the results from one geographic region to another.
DOI: 10.1109/3DIM.2007.10 · Published: 2007-08-21
Citations: 117
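A minimal AdaBoost over decision stumps shows the mechanics of the classifier used above. The single toy feature and two-class ±1 labels are illustrative simplifications (the paper classifies into three or four classes over five features); the boosting loop itself is the standard algorithm.

```python
import math

def stump_predict(s, x):
    """Weak learner: threshold one feature, with an orientation sign."""
    feat, thresh, sign = s
    return sign if x[feat] > thresh else -sign

def train_adaboost(X, y, rounds=10):
    """Standard discrete AdaBoost: repeatedly fit the best weighted stump,
    weight it by its accuracy, and boost the weights of misclassified points."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        best, best_err = None, float("inf")
        for feat in range(len(X[0])):
            for thresh in sorted({x[feat] for x in X}):
                for sign in (1, -1):
                    err = sum(wi for wi, x, yi in zip(w, X, y)
                              if stump_predict((feat, thresh, sign), x) != yi)
                    if err < best_err:
                        best, best_err = (feat, thresh, sign), err
        best_err = max(best_err, 1e-10)          # avoid log(0) for perfect stumps
        alpha = 0.5 * math.log((1 - best_err) / best_err)
        ensemble.append((alpha, best))
        # Re-weight: increase the weight of misclassified samples.
        w = [wi * math.exp(-alpha * yi * stump_predict(best, x))
             for wi, x, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * stump_predict(s, x) for a, s in ensemble)
    return 1 if score >= 0 else -1
```

In the paper's setting each sample would carry the five features (height, height variation, normal variation, return intensity, image intensity) and the usual one-vs-rest or pairwise reductions handle the multi-class labels.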
Light Transport Analysis for 3D Photography
Kiriakos N. Kutulakos
While 3D photography research has enjoyed tremendous success, many everyday objects and materials are still difficult or impossible to capture in 3D. An important stumbling block is that typical algorithms do not consider the effects of light transport, i.e., the sequence of bounces, refractions and scattering events that may occur when light interacts with an object. This puts objects with transparent materials or highly-reflective surfaces (clear plastic, crystal, liquids, polished metal, etc.) outside the reach of current 3D scanning techniques. To overcome these limitations, we have been investigating algorithms that explicitly analyze the light transport process caused by such objects (N. Morris and K.N. Kutulakos, 2005), (K.N. Kutulakos and E. Steger, 2005). These algorithms rely on 2D photos taken from multiple views and reconstruct the individual 3D path(s) that light must have traced in order to reach each pixel. Despite the apparent intractability of this endeavor, our results suggest that reasoning about light transport can produce rich descriptions of surface geometry for objects with complex optical properties.
DOI: 10.1109/3DIM.2007.33 · Published: 2007-08-21
Citations: 1