
Latest Publications in CVGIP: Image Understanding

A Hough Transform Algorithm with a 2D Hypothesis Testing Kernel
Pub Date : 1993-09-01 DOI: 10.1006/ciun.1993.1039
Palmer P.L., Petrou M., Kittler J.

In this paper we consider a Hough transform line finding algorithm in which the voting kernel is a smooth function of differences in both line parameters. The shape of the voting kernel is decided in terms of a hypothesis testing approach, and the shape is adjusted to give optimal results. We show that this new kernel is robust to changes in the distribution of the underlying noise and the implementation is very fast, taking typically 2-3 s on a Sparc 2 workstation for a 256 × 256 image.
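The abstract describes spreading each point's vote smoothly over both line parameters rather than dropping it into a single accumulator bin. As a rough illustration only (not the authors' hypothesis-testing kernel), the sketch below assumes a Gaussian voting kernel and NumPy/SciPy, and uses the fact that voting with a smooth kernel is equivalent to hard voting followed by convolving the accumulator with that kernel; the kernel widths and the neglect of theta periodicity are simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_hough_lines(edge_points, image_shape, n_theta=180, n_rho=256,
                       sigma_theta=1.5, sigma_rho=1.5):
    """Vote for rho = x*cos(theta) + y*sin(theta), then smooth the accumulator
    with a 2D Gaussian (in accumulator bins), which is equivalent to letting
    each edge point vote with a smooth kernel over BOTH line parameters."""
    h, w = image_shape
    diag = float(np.hypot(h, w))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((n_theta, n_rho))
    for x, y in edge_points:
        rho = x * cos_t + y * sin_t                            # one rho per theta bin
        r_idx = np.clip(np.searchsorted(rhos, rho), 0, n_rho - 1)
        acc[np.arange(n_theta), r_idx] += 1.0                  # hard vote
    acc = gaussian_filter(acc, sigma=(sigma_theta, sigma_rho))  # smooth 2D voting kernel
    t_best, r_best = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[t_best], rhos[r_best], acc
```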

{"title":"A Hough Transform Algorithm with a 2D Hypothesis Testing Kernel","authors":"Palmer P.L.,&nbsp;Petrou M.,&nbsp;Kittler J.","doi":"10.1006/ciun.1993.1039","DOIUrl":"https://doi.org/10.1006/ciun.1993.1039","url":null,"abstract":"<div><p>In this paper we consider a Hough transform line finding algorithm in which the voting kernel is a smooth function of differences in <em>both</em> line parameters. The shape of the voting kernel is decided in terms of a hypothesis testing approach, and the shape is adjusted to give optimal results. We show that this new kernel is robust to changes in the distribution of the underlying noise and the implementation is very fast, taking typically 2-3 s on a Sparc 2 workstation for a 256 × 256 image.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 2","pages":"Pages 221-234"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1039","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136714368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Physical Modeling and Combination of Range and Intensity Edge Data
Pub Date : 1993-09-01 DOI: 10.1006/ciun.1993.1038
Zhang G.H., Wallace A.

We present a method for semantic labelling of edges and reconstruction of range data by fusion of registered range and intensity images. An initial set of edge labels is derived using a physical model of object geometry and shading. A final edge classification and range reconstruction are obtained using Bayesian estimation within coupled Markov random fields employing constraints of surface smoothness and edge continuity. The approach is demonstrated on synthetic and real source data, obtained from an active laser rangefinder.
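To give a rough sense of how a label field can be estimated under an MRF smoothness prior (the paper's coupled MRFs and physically derived likelihoods are more elaborate), the sketch below runs iterated conditional modes with a Potts penalty; the data_cost array, beta, and the 4-neighbourhood are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def icm_label(data_cost, beta=1.0, n_iters=10):
    """data_cost: (H, W, L) array giving the cost of each of L labels per pixel.
    Iterated conditional modes: each pixel takes the label minimising its data
    cost plus a Potts penalty beta for every 4-neighbour carrying a different label."""
    H, W, L = data_cost.shape
    labels = np.argmin(data_cost, axis=2)            # initialise from the data term alone
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                best, best_cost = labels[i, j], np.inf
                for lab in range(L):
                    cost = data_cost[i, j, lab]
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != lab:
                            cost += beta             # smoothness / continuity prior
                    if cost < best_cost:
                        best, best_cost = lab, cost
                labels[i, j] = best
    return labels
```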

{"title":"Physical Modeling and Combination of Range and Intensity Edge Data","authors":"Zhang G.H.,&nbsp;Wallace A.","doi":"10.1006/ciun.1993.1038","DOIUrl":"10.1006/ciun.1993.1038","url":null,"abstract":"<div><p>We present a method for semantic labelling of edges and reconstruction of range data by fusion of registered range and intensity images. An initial set of edge labels is derived using a physical model of object geometry and shading. A final edge classification and range reconstruction are obtained using Bayesian estimation within coupled Markov random fields employing constraints of surface smoothness and edge continuity. The approach is demonstrated on synthetic and real source data, obtained from an active laser rangefinder.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 2","pages":"Pages 191-220"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77838111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Rapid Octree Construction from Image Sequences
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1029
Szeliski R.

The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time.
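A minimal, non-incremental sketch of silhouette-based carving into an octree follows, assuming each view supplies a projection function and a boolean silhouette mask; the bounding box of the projected cube corners stands in for an exact cube projection, and the paper's limited per-view, incremental processing is not reproduced here.

```python
import numpy as np

def carve_octree(center, half, views, depth, max_depth):
    """views: list of (project, silhouette) pairs, where project maps an (N, 3)
    array of 3D points to (N, 2) pixel coordinates (x, y) and silhouette is a
    boolean image. Returns the occupied leaf cubes as (center, half) pairs."""
    offsets = np.array([[sx, sy, sz] for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    corners = center + half * offsets                 # the 8 cube corners
    fully_inside = True
    for project, sil in views:
        px = np.round(project(corners)).astype(int)
        x0, y0 = px.min(axis=0)
        x1, y1 = px.max(axis=0)
        x0, y0 = max(x0, 0), max(y0, 0)
        x1, y1 = min(x1, sil.shape[1] - 1), min(y1, sil.shape[0] - 1)
        if x1 < x0 or y1 < y0:
            return []                                 # projects outside the image: carve away
        patch = sil[y0:y1 + 1, x0:x1 + 1]
        if not patch.any():
            return []                                 # outside the silhouette in this view
        if not patch.all():
            fully_inside = False                      # straddles the silhouette boundary
    if fully_inside or depth == max_depth:
        return [(center, half)]                       # keep as an occupied leaf
    kids = []
    for off in offsets:                               # recurse into the 8 octants
        kids += carve_octree(center + 0.5 * half * off, 0.5 * half, views, depth + 1, max_depth)
    return kids
```

A call such as carve_octree(np.zeros(3), 1.0, views, 0, max_depth=6) would return the occupied leaves of a bounding cube of half-width 1 centred at the origin.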

{"title":"Rapid Octree Construction from Image Sequences","authors":"Szeliski R.","doi":"10.1006/ciun.1993.1029","DOIUrl":"10.1006/ciun.1993.1029","url":null,"abstract":"<div><p>The construction of a three-dimensional object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, we use an octree, which represents the object as a tree of recursively subdivided cubes. We develop a new algorithm for computing the octree bounding volume from multiple silhouettes and apply it to an object rotating on a turntable in front of a stationary camera. The algorithm performs a limited amount of processing for each viewpoint and incrementally builds the volumetric model. The resulting algorithm requires less total computation than previous algorithms, runs in close to real-time, and builds a model whose resolution improves over time.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 23-32"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1029","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87565986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 632
Image Analysis and Computer Vision: 1992
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1033
Rosenfeld A.

This paper presents a bibliography of nearly 1900 references related to computer vision and image analysis, arranged by subject matter. The topics covered include architectures; computational techniques; feature detection and segmentation; image analysis; two-dimensional shape; pattern; color and texture; matching and stereo; three-dimensional recovery and analysis; three-dimensional shape; and motion. A few references are also given on related topics, such as geometry, graphics, image input/output and coding, image processing, optical processing, visual perception, neural nets, pattern recognition, and artificial intelligence, as well as on applications.

{"title":"Image Analysis and Computer Vision: 1992","authors":"Rosenfeld A.","doi":"10.1006/ciun.1993.1033","DOIUrl":"10.1006/ciun.1993.1033","url":null,"abstract":"<div><p>This paper presents a bibliography of nearly 1900 references related to computer vision and image analysis, arranged by subject matter. The topics covered include architectures; computational techniques; feature detection and segmentation; image analysis; two-dimensional shape; pattern; color and texture; matching and stereo; three-dimensional recovery and analysis; three-dimensional shape; and motion. A few references are also given on related topics, such as geometry, graphics, image input/output and coding, image processing, optical processing, visual perception, neural nets, pattern recognition, and artificial intelligence, as well as on applications.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 85-135"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1033","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84202531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Extracting Geometric Primitives
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1028
Roth G., Levine M.D.

Extracting geometric primitives is an important task in model-based computer vision. The Hough transform is the most common method of extracting geometric primitives. Recently, methods derived from the field of robust statistics have been used for this purpose. We show that extracting a single geometric primitive is equivalent to finding the optimum value of a cost function which has potentially many local minima. Besides providing a unifying way of understanding different primitive extraction algorithms, this model also shows that for efficient extraction the true global minimum must be found with as few evaluations of the cost function as possible. In order to extract a single geometric primitive we choose a number of minimal subsets randomly from the geometric data. The cost function is evaluated for each of these, and the primitive defined by the subset with the best value of the cost function is extracted from the geometric data. To extract multiple primitives, this process is repeated on the geometric data that do not belong to the primitive. The resulting extraction algorithm can be used with a wide variety of geometric primitives and geometric data. It is easily parallelized, and we describe some possible implementations on a variety of parallel architectures. We make a detailed comparison with the Hough transform and show that it has a number of advantages over this classic technique.
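For the simplest primitive (2D lines), the sample-minimal-subsets, score-by-cost-function, remove-and-repeat loop described above might look like the sketch below; the inlier-count cost, the tolerance tol, and min_inliers are illustrative stand-ins for the paper's cost function and stopping rule, not the authors' exact choices.

```python
import numpy as np

def fit_line(p, q):
    """Return (a, b, c) with a*x + b*y + c = 0 and a^2 + b^2 = 1 through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y1 - y2, x2 - x1
    n = np.hypot(a, b)
    if n == 0:
        return None
    a, b = a / n, b / n
    return a, b, -(a * x1 + b * y1)

def extract_lines(points, n_lines=3, n_subsets=500, tol=2.0, min_inliers=20, rng=None):
    """Sample minimal subsets (2 points per line), score each candidate by its
    inlier count, extract the best line, remove its inliers, and repeat."""
    rng = rng or np.random.default_rng(0)
    pts = np.asarray(points, dtype=float)
    lines = []
    for _ in range(n_lines):
        if len(pts) < 2:
            break
        best_line, best_mask = None, None
        for _ in range(n_subsets):
            i, j = rng.choice(len(pts), size=2, replace=False)
            line = fit_line(pts[i], pts[j])
            if line is None:
                continue
            a, b, c = line
            mask = np.abs(a * pts[:, 0] + b * pts[:, 1] + c) < tol   # cost = -inlier count
            if best_mask is None or mask.sum() > best_mask.sum():
                best_line, best_mask = line, mask
        if best_mask is None or best_mask.sum() < min_inliers:
            break
        lines.append(best_line)
        pts = pts[~best_mask]                         # repeat on data not on this primitive
    return lines
```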

{"title":"Extracting Geometric Primitives","authors":"Roth G.,&nbsp;Levine M.D.","doi":"10.1006/ciun.1993.1028","DOIUrl":"10.1006/ciun.1993.1028","url":null,"abstract":"<div><p>Extracting geometric primitives is an important task in model-based computer vision. The Hough transform is the most common method of extracting geometric primitives. Recently, methods derived from the field of robust statistics have been used for this purpose. We show that extracting a single geometric primitive is equivalent to finding the optimum value of a cost function which has potentially many local minima. Besides providing a unifying way of understanding different primitive extraction algorithms, this model also shows that for efficient extraction the true global minimum must be found with as few evaluations of the cost function as possible. In order to extract a single geometric primitive we choose a number of minimal subsets randomly from the geometric data. The cost function is evaluated for each of these, and the primitive defined by the subset with the best value of the cost function is extracted from the geometric data. To extract multiple primitives, this process is repeated on the geometric data that do not belong to the primitive. The resulting extraction algorithm can be used with a wide variety of geometric primitives and geometric data. It is easily parallelized, and we describe some possible implementations on a variety of parallel architectures. We make a detailed comparison with the Hough transform and show that it has a number of advantages over this classic technique.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 1-22"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76948313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 196
Invariant Signatures for Planar Shape Recognition under Partial Occlusion
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1031
Bruckstein A.M., Holt R.J., Netravali A.N., Richardson T.J.

A planar shape distorted by a projective viewing transformation can be recognized under partial occlusion if an invariant description of its boundary is available. Invariant boundary descriptions should be based solely on the local properties of the boundary curve, perhaps relying on further information on the viewing transformation. Recent research in this area has provided a theory for invariant boundary descriptions based on an interplay of differential, local, and global invariants. Differential invariants require high-order derivatives. However, the use of global invariances and point match information on the distorting transformations enables the derivation of invariant signatures for planar shapes using lower order derivatives. Trade-offs between the highest order derivatives required and the quantity of additional information constraining the distorting viewing transformations are made explicit. Once an invariant is established, recognition of the equivalence of two objects requires only partial function matching. Uses of these invariants include the identification of planar surfaces in varying orientations and resolving the outline of a cluster of planar objects into individual components.

{"title":"Invariant Signatures for Planar Shape Recognition under Partial Occlusion","authors":"Bruckstein A.M.,&nbsp;Holt R.J.,&nbsp;Netravali A.N.,&nbsp;Richardson T.J.","doi":"10.1006/ciun.1993.1031","DOIUrl":"https://doi.org/10.1006/ciun.1993.1031","url":null,"abstract":"<div><p>A planar shape distorted by a projective viewing transformation can be recognized under partial occlusion if an invariant description of its boundary is available. Invariant boundary descriptions should be based solely on the local properties of the boundary curve, perhaps relying on further information on the viewing transformation. Recent research in this area has provided a theory for invariant boundary descriptions based on an interplay of differential, local, and global invariants. Differential invariants require high-order derivatives. However, the use of global invariances and point match information on the distorting transformations enables the derivation of invariant signatures for planar shapes using lower order derivatives. Trade-offs between the highest order derivatives required and the quantity of additional information constraining the distorting viewing transformations are made explicit. Once an invariant is established, recognition of the equivalence of two objects requires only partial function matching. Uses of these invariants include the identification of planar surfaces in varying orientations and resolving the outline of a cluster for planar objects into individual components.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 49-65"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1031","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92057304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Projective Pose Estimation of Linear and Quadratic Primitives in Monocular Computer Vision
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1032
Ferri M., Mangili F., Viano G.

In this paper the relevance of perspective geometry for 3D scene analysis from a single view is asserted. Analytic procedures for perspective inversion of special primitive configurations are presented. Four configurations are treated: (1) four coplanar segments; (2) three orthogonal segments; (3) a circle arc; (4) a quadric of revolution. A complete and thorough illustration of the developed methodologies is given. The importance of the selected primitives is illustrated in different application contexts. Experimental results on real images are provided for configurations (3) and (4).

{"title":"Projective Pose Estimation of Linear and Quadratic Primitives in Monocular Computer Vision","authors":"Ferri M.,&nbsp;Mangili F.,&nbsp;Viano G.","doi":"10.1006/ciun.1993.1032","DOIUrl":"10.1006/ciun.1993.1032","url":null,"abstract":"<div><p>In this paper the relevance of perspective geometry for 3D scene analysis from a single view is asserted. Analytic procedures for perspective inversion of special primitive configurations are presented. Four configurations are treated: (1) four coplanar segments; (2) three orthogonal segments; (3) a circle arc; (4) a quadric of revolution. A complete and thorough illustration of the developed methodologies is given. The importance of the selected primitives is illustrated in different application contexts. Experimental results on real images are provided for configurations (3) and (4).</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 66-84"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1032","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73106186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 45
CAD-Based Vision: Object Recognition in Cluttered Range Images Using Recognition Strategies
Pub Date : 1993-07-01 DOI: 10.1006/ciun.1993.1030
Arman F., Aggarwal J.K.

This paper addresses the problem of recognizing an object in a given scene using a three-dimensional model of the object. The scene may contain several overlapping objects, arbitrarily positioned and oriented. A laser range scanner is used to collect three-dimensional (3D) data points from the scene. The collected data is segmented into surface patches, and the segments are used to calculate various 3D surface properties. The CAD models are designed using commercially available CADKEY and accessed via the industry standard IGES. The models are analyzed off-line to derive various geometric features, their relationships, and their attributes. A strategy for identifying each model is then automatically generated and stored. The strategy is applied at run-time to complete the task of object recognition. The goal of the generated strategy is to select the model's geometric features in the sequence which may best be used to identify and locate the model in the scene. The generated strategy is guided by several factors, such as visibility, detectability, frequency of occurrence, and the topology of the features. The paper concludes with examples of the generated strategies and their application to object recognition in several scenes containing multiple objects.

{"title":"CAD-Based Vision: Object Recognition in Cluttered Range Images Using Recognition Strategies","authors":"Arman F.,&nbsp;Aggarwal J.K.","doi":"10.1006/ciun.1993.1030","DOIUrl":"10.1006/ciun.1993.1030","url":null,"abstract":"<div><p>This paper addresses the problem of recognizing an object in a given scene using a three-dimensional model of the object. The scene may contain several overlapping objects, arbitrarily positioned and oriented. A laser range scanner is used to collect three-dimensional (3D) data points from the scene. The collected data is segmented into surface patches, and the segments are used to calculate various 3D surface properties. The CAD models are designed using commercially available CADKEY and accessed via the industry standard IGES. The models are analyzed off-line to derive various geometric features, their relationships, and their attributes. A strategy for identifying each model is then automatically generated and stored. The strategy is applied at run-time to complete the task of object recognition. The goal of the generated strategy is to select the model′s geometric features in the sequence which may best be used to identify and locate the model in the scene. The generated strategy is guided by several factors, such as the visibility, detectability, the frequency of occurrence, and the topology of the features. The paper concludes with examples of the generated strategies and their application to object recognition in several scenes containing multiple objects.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"58 1","pages":"Pages 33-48"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88441972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 32
Sparse, Opaque Three-Dimensional Texture 1. Arborescent Patterns
Pub Date : 1993-05-01 DOI: 10.1006/ciun.1993.1026
Waksman A., Rosenfeld A.

Plants such as trees can be modeled by three-dimensional hierarchical branching structures. If these structures are sufficiently sparse, so that self-occlusion is relatively minor, their geometrical properties can be recovered from a single image.

{"title":"Sparse, Opaque Three-Dimensional Texture 1. Arborescent Patterns","authors":"Waksman A.,&nbsp;Rosenfeld A.","doi":"10.1006/ciun.1993.1026","DOIUrl":"10.1006/ciun.1993.1026","url":null,"abstract":"<div><p>Plants such as trees can be modeled by three-dimensional hierarchial branching structures. If these structures are sufficiently sparse, so that self-occulation is relatively minor, their geometrical properties can be recovered from a single image.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"57 3","pages":"Pages 388-399"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1026","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80639429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Finding and Recovering SHGC Objects in an Edge Image
Pub Date : 1993-05-01 DOI: 10.1006/ciun.1993.1023
Sato H., Binford T.O.

A set of modules to extract partial descriptions of SHGC objects in an edge image is presented. It consists of modules to find end edges, to find meridian edges, to find cross-section edges, and to recover 3D shapes. The first goal of the system is to extract geometrical edges derived from an SHGC object. From an input edge image, pairs of end edges are detected first by verifying strong geometrical constraints for the ends of an SHGC. Then, meridian edges are detected using the tangent-intersection constraint and constraints related to the end edges. The second goal is to recover 3D information of the object. The axis of the SHGC and the axes of skewed symmetry in the cross-section edges are detected. Then, the original cross section and the sweeping rule are recovered by utilizing these three orthogonal axes. Extracted geometrical edges and 3D information from real images are shown.

{"title":"Finding and Recovering SHGC Objects in an Edge Image","authors":"Sato H.,&nbsp;Binford T.O.","doi":"10.1006/ciun.1993.1023","DOIUrl":"10.1006/ciun.1993.1023","url":null,"abstract":"<div><p>A set of modules to extract partial descriptions of SHGC objects in an edge image is presented. It consists of modules to find end edges, to find meridian edges, to find cross-section edges, and to recover 3D shapes. The first goal of the system is to extract geometrical edges derived from an SHGC object. From an input edge image, pairs of end edges are detected first by verifying strong geometrical constraints for the ends of an SHGC. Then, meridian edges are detected by using the constraint for tangent intersections and the ones related to the end edges. The second goal is to recover 3D information of the object. The axis of SHGC and the axes of skewed symmetry in cross-section edges are detected. Then, original cross section and the sweeping rule are recovered by utilizing these three orthogonal axes. Extracted geometrical edges and 3D information from real images are shown.</p></div>","PeriodicalId":100350,"journal":{"name":"CVGIP: Image Understanding","volume":"57 3","pages":"Pages 346-358"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/ciun.1993.1023","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78804860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 44