
Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271): Latest Publications

Modeling geometric structure and illumination variation of a scene from real images
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710845
Zhengyou Zhang
We present in this paper a system which automatically builds, from real images, a scene model containing both 3D geometric information of the scene structure and its photometric information under various illumination conditions. The geometric structure is recovered from images taken from distinct viewpoints. Structure-from-motion and correlation-based stereo techniques are used to match pixels between images of different viewpoints and to reconstruct the scene in 3D space. The photometric property is extracted from images taken under different illumination conditions (orientation, position and intensity of the light sources). This is achieved by computing a low-dimensional linear space of the spatio-illumination volume, and is represented by a set of basis images. The model that has been built can be used to create realistic renderings from different viewpoints and illumination conditions. Applications include object recognition, virtual reality and product advertisement.
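For a fixed viewpoint, the low-dimensional linear space of the spatio-illumination volume can be approximated by a principal-component decomposition of registered images taken under different lightings, giving a mean image plus a small set of basis images. Below is a minimal sketch of that idea, assuming a stack of registered grayscale images imgs (one per lighting condition); the function names, the use of an SVD, and the choice of k are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def illumination_basis(imgs, k=3):
        """Estimate k basis images spanning the illumination variation.
        imgs: array of shape (n_lightings, height, width), one image per light setup."""
        n, h, w = imgs.shape
        X = imgs.reshape(n, h * w).astype(np.float64)
        mean = X.mean(axis=0)                                # average image
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:k]                                       # k principal basis images
        return mean.reshape(h, w), basis.reshape(k, h, w)

    def relight(mean, basis, coeffs):
        """Synthesize a new image as mean + linear combination of the basis images."""
        return mean + np.tensordot(coeffs, basis, axes=1)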
Citations: 40
Egomotion estimation using log-polar images
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710833
C. Silva, J. Santos-Victor
We address the problem of egomotion estimation of a monocular observer moving with arbitrary translation and rotation in an unknown environment, using log-polar images. The method we propose is uniquely based on the spatio-temporal image derivatives, or the normal flow. Thus, we avoid computing the complete optical flow field, which is an ill-posed problem due to the aperture problem. We use a search paradigm based on geometric properties of the normal flow field, and consider a family of search subspaces to estimate the egomotion parameters. These algorithms are particularly well-suited for the log-polar image geometry, as we use a selection of special normal flow vectors with a simple representation in log-polar coordinates. This approach highlights the close coupling between algorithmic aspects and the sensor geometry (retina physiology) often found in nature. Finally, we present and discuss a set of experiments, for various kinds of camera motions, which show encouraging results.
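The normal flow used here is the component of image motion along the local brightness gradient, which follows directly from the spatio-temporal derivatives and therefore sidesteps the aperture problem. Below is a minimal sketch on a Cartesian grid, assuming two consecutive grayscale frames; the log-polar resampling and the search over egomotion subspaces are not shown, and the small epsilon is an assumption to avoid division by zero.

    import numpy as np

    def normal_flow(frame0, frame1, eps=1e-6):
        """Normal flow: -(I_t / |grad I|^2) * grad I, per pixel."""
        I0 = frame0.astype(np.float64)
        I1 = frame1.astype(np.float64)
        Iy, Ix = np.gradient(I0)              # spatial derivatives (rows, columns)
        It = I1 - I0                          # temporal derivative
        mag2 = Ix**2 + Iy**2 + eps
        scale = -It / mag2
        return scale * Ix, scale * Iy         # normal flow components (u_n, v_n)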
Citations: 12
Fast stereovision with subpixel-precision
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710842
R. Henkel
A fast stereo algorithm based on aliasing effects of simple disparity estimators within a coherence detection scheme is presented. The algorithm calculates dense disparity maps with subpixel-precision by performing local spatial filter operations and simple arithmetic transformations. Performance similar to classical area-based approaches is achieved, but without the complicated hierarchical search structure typical for these approaches. The algorithm is completely parallel; the disparity values are calculated independently for each pixel. In addition, local validation counts for the disparity estimates and a fused cyclopean view of the scene are available within the proposed network structure for coherence-based stereo.
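One way to read the coherence-detection idea is to run several simple differential disparity estimators, one per spatial-frequency channel, and keep a pixel's disparity only where the channels agree. The sketch below is a generic illustration under that reading, not Henkel's network; the Gaussian scales, the agreement tolerance, and the sign convention are all assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def coherent_disparity(left, right, sigmas=(1.0, 2.0, 4.0), tol=0.25):
        """Per-channel differential disparity estimates; keep pixels where they agree."""
        L = left.astype(np.float64)
        R = right.astype(np.float64)
        estimates = []
        for s in sigmas:
            Ls, Rs = gaussian_filter(L, s), gaussian_filter(R, s)
            Lx = np.gradient(Ls, axis=1)               # horizontal derivative
            d = (Ls - Rs) / (Lx + 1e-6)                # first-order estimate, R(x) ~ L(x - d)
            estimates.append(d)
        est = np.stack(estimates)
        mean = est.mean(axis=0)
        coherent = est.std(axis=0) < tol * (np.abs(mean) + 1.0)
        return np.where(coherent, mean, np.nan)        # NaN where the channels disagree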
Citations: 7
Construction and refinement of panoramic mosaics with global and local alignment
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710831
H. Shum, R. Szeliski
This paper presents techniques for constructing full view panoramic mosaics from sequences of images. Our representation associates a rotation matrix (and optionally a focal length) with each input image, rather than explicitly projecting all of the images onto a common surface (e.g., a cylinder). In order to reduce accumulated registration errors, we apply global alignment (block adjustment) to the whole sequence of images, which results in an optimal image mosaic (in the least squares sense). To compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, we develop a local alignment (deghosting) technique which warps each image based on the results of pairwise local image registrations. By combining both global and local alignment we significantly improve the quality of our image mosaics, thereby enabling the creation of full view panoramic mosaics with hand-held cameras.
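For a camera that only rotates about its optical centre, two views are related by the homography H = K R K^{-1}, where K encodes the focal length; this is the relation behind associating a rotation matrix and (optionally) a focal length with each input image. Below is a minimal sketch assuming a known rotation R and focal length f; the block adjustment and deghosting steps are not shown.

    import numpy as np

    def rotation_homography(R, f, cx=0.0, cy=0.0):
        """Homography mapping pixels of the reference view into a view rotated by R."""
        K = np.array([[f, 0.0, cx],
                      [0.0, f, cy],
                      [0.0, 0.0, 1.0]])
        return K @ R @ np.linalg.inv(K)

    def warp_point(H, x, y):
        """Apply the homography to one pixel and return the mapped pixel coordinates."""
        p = H @ np.array([x, y, 1.0])
        return p[0] / p[2], p[1] / p[2]

With rotations and focal lengths estimated per image, warping every frame through such homographies places them consistently on the panorama without committing to a particular projection surface.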
Citations: 199
Segmenting cortical gray matter for functional MRI visualization
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710733
P. Teo, G. Sapiro, B. Wandell
We describe a system that is being used to segment gray matter and create connected cortical representations from MRI. The method exploits knowledge of the anatomy of the cortex and incorporates structural constraints into the segmentation. First, the white matter and CSF regions in the MR volume are segmented using some novel techniques of posterior anisotropic diffusion. Then, the user selects the cortical white matter component of interest, and its structure is verified by checking for cavities and handles. After this, a connected representation of the gray matter is created by a constrained growing-out from the white matter boundary. Because the connectivity is computed, the segmentation can be used as input to several methods of visualizing the spatial pattern of cortical activity within gray matter. In our case, the connected representation of gray matter is used to create a representation of the flattened cortex. Then, fMRI measurements are overlaid on the flattened representation, yielding a representation of the volumetric data within a single image.
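Anisotropic diffusion smooths within homogeneous regions while keeping diffusion from crossing strong boundaries, which is why it is useful before segmenting the white matter and CSF regions. The sketch below is a plain Perona-Malik iteration on a 2D array; the paper applies diffusion to posterior probabilities, and the conductance function, kappa, time step, and wrap-around boundary handling here are generic assumptions.

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
        """Edge-preserving smoothing: diffuse strongly where gradients are small."""
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # one-sided differences to the four neighbours (np.roll wraps at the border)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # conductance g(d) = exp(-(d/kappa)^2) shuts diffusion off across edges
            g = lambda d: np.exp(-(d / kappa) ** 2)
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u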
Citations: 8
Robust multi-sensor image alignment
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710832
M. Irani, P. Anandan
This paper presents a method for alignment of images acquired by sensors of different modalities (e.g., EO and IR). The paper has two main contributions: (i) It identifies an appropriate image representation for multi-sensor alignment, i.e., a representation which emphasizes the common information between the two multi-sensor images, suppresses the non-common information, and is adequate for coarse-to-fine processing. (ii) It presents a new alignment technique which applies global estimation to any choice of a local similarity measure. In particular, it is shown that when this registration technique is applied to the chosen image representation with a local normalized-correlation similarity measure, it provides a new multi-sensor alignment algorithm which is robust to outliers, and applies to a wide variety of globally complex brightness transformations between the two images. Our proposed image representation does not rely on sparse image features (e.g., edge, contour, or point features). It is continuous and does not eliminate the detailed variations within local image regions. Our method naturally extends to coarse-to-fine processing, and applies even in situations when the multi-sensor signals are globally characterized by low statistical correlation.
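The local similarity measure plugged into the global estimation here is normalized correlation, which is invariant to local gain and offset and therefore tolerant of modality-dependent brightness changes. Below is a minimal sketch of patchwise zero-mean normalized correlation between two images of equal size; the window size is an assumption, and the representation built from directional-derivative energies and the coarse-to-fine global estimation are not shown.

    import numpy as np

    def normalized_correlation(patch_a, patch_b, eps=1e-9):
        """Zero-mean normalized correlation of two equally sized patches, in [-1, 1]."""
        a = patch_a.astype(np.float64).ravel()
        b = patch_b.astype(np.float64).ravel()
        a -= a.mean()
        b -= b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

    def local_correlation_map(img_a, img_b, win=7):
        """Normalized correlation over a sliding window at each grid position."""
        h, w = img_a.shape
        out = np.zeros((h - win + 1, w - win + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = normalized_correlation(img_a[i:i+win, j:j+win],
                                                   img_b[i:i+win, j:j+win])
        return out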
Citations: 267
Mixtures of eigenfeatures for real-time structure from texture
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710710
T. Jebara, Kenneth B. Russell, A. Pentland
We describe a face modeling system which estimates complete facial structure and texture from a real-time video stream. The system begins with a face tracking algorithm which detects and stabilizes live facial images into a canonical 3D pose. The resulting canonical texture is then processed by a statistical model to filter imperfections and estimate unknown components such as missing pixels and underlying 3D structure. This statistical model is a soft mixture of eigenfeature selectors which span the 3D deformations and texture changes across a training set of laser scanned faces. An iterative algorithm is introduced for determining the dimensional partitioning of the eigenfeatures to maximize their generalization capability over a cross-validation set of data. The model's abilities to filter and estimate absent facial components are then demonstrated over incomplete 3D data. This ultimately allows the model to span known and regress unknown facial information from stabilized natural video sequences generated by a face tracking algorithm. The resulting continuous and dynamic estimation of the model's parameters over a video sequence generates a compact temporal description of the 3D deformations and texture changes of the face.
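Estimating missing pixels with an eigenfeature model can be posed as a least-squares projection onto the basis using only the observed entries, followed by reconstruction of the full vector. Below is a minimal sketch under that reading; the mixture of eigenfeature selectors, the 3D structure recovery, and the tracking front end are not shown, and the variable names are illustrative.

    import numpy as np

    def fit_coefficients(basis, mean, observation, mask):
        """Least-squares eigen-coefficients using only the observed (mask==True) entries.
        basis: (k, d) eigenvectors, mean: (d,), observation: (d,), mask: (d,) bool."""
        A = basis[:, mask].T                    # observed rows of the basis, shape (m, k)
        b = observation[mask] - mean[mask]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs

    def reconstruct(basis, mean, coeffs):
        """Full reconstruction, including the entries that were missing."""
        return mean + coeffs @ basis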
Citations: 74
A real-time algorithm for medical shape recovery
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710735
R. Malladi, J. Sethian
In this paper, we present a shape recovery technique in 2D and 3D with specific applications in visualizing and measuring anatomical shapes from medical images. This algorithm models extremely corrugated structures like the brain, is topologically adaptable, is robust, and runs in O(N log N) time where N is the total number of points in the domain. Our two-stage technique is based on the level set shape recovery scheme and the fast marching method for computing solutions to static Hamilton-Jacobi equations.
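The O(N log N) bound comes from the fast marching side of the two-stage technique: grid points are finalized in order of increasing arrival time using a min-heap, and each update solves a small quadratic form of the Eikonal equation. Below is a minimal 2D sketch with unit speed, intended only to illustrate the ordered update; the level-set shape-recovery stage and the medical-imaging specifics are not shown.

    import heapq
    import numpy as np

    def fast_marching(shape, seeds):
        """Arrival time T with |grad T| = 1 from the seed pixels, finalized via a min-heap."""
        T = np.full(shape, np.inf)
        done = np.zeros(shape, dtype=bool)
        heap = []
        for s in seeds:
            T[s] = 0.0
            heap.append((0.0, tuple(s)))
        heapq.heapify(heap)
        while heap:
            t, (i, j) = heapq.heappop(heap)
            if done[i, j]:
                continue                              # stale heap entry
            done[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < shape[0] and 0 <= nj < shape[1] and not done[ni, nj]:
                    # smallest neighbour time along each axis (grid spacing 1)
                    tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                             T[ni, nj + 1] if nj < shape[1] - 1 else np.inf)
                    ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                             T[ni + 1, nj] if ni < shape[0] - 1 else np.inf)
                    a, b = min(tx, ty), max(tx, ty)
                    # one-sided update if the axes disagree too much, else the quadratic solve
                    new_t = a + 1.0 if b - a >= 1.0 else 0.5 * (a + b + np.sqrt(2.0 - (a - b) ** 2))
                    if new_t < T[ni, nj]:
                        T[ni, nj] = new_t
                        heapq.heappush(heap, (new_t, (ni, nj)))
        return T

For example, fast_marching((64, 64), [(32, 32)]) returns, at every pixel, an approximation of its distance from the centre seed; each pixel enters and leaves the heap a bounded number of times, giving the O(N log N) behaviour.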
Citations: 155
PIMs and invariant parts for shape recognition
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710813
Zhibin Lei, T. Tasdizen, D. Cooper
We present completely new, very powerful solutions to two fundamental problems central to computer vision. 1. Given data sets representing C objects to be stored in a database, and given a new data set for an object, determine the object in the database that is most like the object measured. We solve this problem through use of PIMs ("Polynomial Interpolated Measures"), which is a new representation integrating implicit polynomial curves and surfaces, explicit polynomials, and discrete data sets which may be sparse. The method provides high accuracy at low computational cost. 2. Given noisy 2D data along a curve (or 3D data along a surface), decompose the data into patches such that new data taken along affine transformations or Euclidean transformations of the curve (or surface) can be decomposed into corresponding patches. Then recognition of complex or partially occluded objects can be done in terms of invariantly determined patches. We briefly outline a low computational cost image-database indexing system based on this representation for objects having complex shape geometry.
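An implicit polynomial curve f(x, y) = sum over i, j of a_ij x^i y^j = 0 can be fitted to 2D data by minimizing the algebraic residual of a monomial design matrix. The sketch below is an ordinary algebraic least-squares fit for illustration only, not the stabilized fitting or the PIM construction of the paper; the polynomial degree is an assumption.

    import numpy as np

    def monomials(x, y, degree):
        """Columns x^i * y^j for all i + j <= degree."""
        cols = [x**i * y**j for i in range(degree + 1)
                            for j in range(degree + 1 - i)]
        return np.stack(cols, axis=1)

    def fit_implicit_polynomial(points, degree=4):
        """Coefficients a minimizing ||M a|| subject to ||a|| = 1 (algebraic fit)."""
        x, y = points[:, 0], points[:, 1]
        M = monomials(x, y, degree)
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        return Vt[-1]                          # singular vector of the smallest singular value

    def evaluate(coeffs, x, y, degree=4):
        """Evaluate f(x, y); the zero set approximates the fitted curve."""
        return monomials(np.atleast_1d(x), np.atleast_1d(y), degree) @ coeffs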
Citations: 26
Affine reconstruction of curved surfaces from uncalibrated views of apparent contours
Pub Date: 1998-01-04 DOI: 10.1109/ICCV.1998.710796
J. Sato, R. Cipolla
In this paper, we show that even if the camera is uncalibrated, and its translational motion is unknown, curved surfaces can be reconstructed from their apparent contours up to a 3D affine ambiguity. Furthermore, we show that even if the reconstruction is nonmetric (non-Euclidean), we can still extract useful information for many computer vision applications just from the apparent contours. We first show that if the camera undergoes pure translation (unknown direction and magnitude), the epipolar geometry can be recovered from the apparent contours without using any search or optimisation process. The extracted epipolar geometry is next used for reconstructing curved surfaces from the deformations of the apparent contours viewed from uncalibrated cameras. The result is applied to distinguishing curved surfaces from fixed features in images. It is also shown that the time-to-contact to the curved surfaces can be computed from simple measurements of the apparent contours. The proposed method is implemented and tested on real images of curved surfaces.
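Under pure translation every epipolar line passes through the epipole (the focus of expansion), so a point and its correspondence in the next view are collinear with it; the epipole can therefore be recovered linearly. Below is a minimal least-squares sketch using point correspondences for illustration; the paper recovers the same geometry from apparent-contour measurements (epipolar tangencies), which are not shown, and a finite epipole is assumed.

    import numpy as np

    def epipole_from_translation(pts0, pts1):
        """Least-squares focus of expansion from correspondences under pure translation.
        pts0, pts1: (n, 2) corresponding image points in the two views."""
        h0 = np.column_stack([pts0, np.ones(len(pts0))])   # homogeneous coordinates
        h1 = np.column_stack([pts1, np.ones(len(pts1))])
        lines = np.cross(h0, h1)                           # each row: line joining x and x'
        _, _, Vt = np.linalg.svd(lines)                    # epipole e with lines @ e ~ 0
        e = Vt[-1]
        return e[:2] / e[2]                                # assumes the epipole is finite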
Citations: 36