
Latest publications from the 2003 Conference on Computer Vision and Pattern Recognition Workshop

Salient Features and Hypothesis Testing: evaluating a novel approach for segmentation and address block location
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10022
D. Menotti, D. Borges, A. Britto
This paper presents a modification, with further experiments, of our segmentation algorithm based on feature selection in wavelet space [9]. The aim is to automatically separate the regions of postal envelopes related to the background, stamps, rubber stamps, and the address blocks. First, a typical image of a postal envelope is decomposed using the Mallat algorithm and the Haar basis. High-frequency channel outputs are analyzed to locate salient points in order to separate the background. A statistical hypothesis test is applied to decide upon more consistent regions in order to clean out the remaining noise. The selected points are projected back to the original gray-level image, where the evidence from the wavelet space is used to start a growing process that includes the pixels most likely to belong to the regions of stamps, rubber stamps, and the written area. We have modified the growing process controlled by the salient points, and the results were greatly improved, reaching a success rate of over 97%. Experiments are run using original postal envelopes from the Brazilian Post Office Agency, and here we report results on 440 images with many different layouts and backgrounds.
{"title":"Salient Features and Hypothesis Testing: evaluating a novel approach for segmentation and address block location","authors":"D. Menotti, D. Borges, A. Britto","doi":"10.1109/CVPRW.2003.10022","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10022","url":null,"abstract":"This paper presents a modification with further experiments of a segmentation algorithm based on feature selection in wavelet space of ours [9]. The aim is to automatically separate in postal envelopes the regions related to background, stamps, rubber stamps, and the address blocks. First, a typical image of a postal envelope is decomposed using Mallat algorithm and Haar basis. High frequency channel outputs are analyzed to locate salient points in order to separate the background. A statistical hypothesis test is taken to decide upon more consistent regions in order to clean out some noise left. The selected points are projected back to the original gray level image, where the evidence from the wavelet space is used to start a growing process to include the pixels more likely to belong to the regions of stamps, rubber stamps, and written area. We have modified the growing process controlled by the salient points and the results were greatly improved reaching success rate of over 97%. 
Experiments are run using original postal envelopes from the Brazilian Post Office Agency, and here we report results on 440 images with many different layouts and backgrounds.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128173316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
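The decomposition step in the abstract above (one level of the Mallat algorithm with the Haar basis) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and variable names are mine.

```python
import numpy as np

def haar_decompose(img):
    """One level of the 2-D Mallat algorithm with the Haar basis.

    Returns the low-pass band LL and the three high-frequency
    channels (LH, HL, HH) whose strong responses mark salient points.
    Assumes img has even height and width.
    """
    a, b = img[0::2, :], img[1::2, :]
    lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0   # filter rows

    def cols(x):                                 # then filter columns
        c, d = x[:, 0::2], x[:, 1::2]
        return (c + d) / 2.0, (c - d) / 2.0

    LL, LH = cols(lo_r)
    HL, HH = cols(hi_r)
    return LL, LH, HL, HH

# toy 4x4 ramp image: LL holds 2x2 block averages
img = np.arange(16, dtype=float).reshape(4, 4)
LL, LH, HL, HH = haar_decompose(img)
```

Salient points would then be taken where the |LH|, |HL|, |HH| responses are large, before the hypothesis-testing stage.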
Rendering novel views from a set of omnidirectional mosaic images
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10079
H. Bakstein, T. Pajdla
We present an approach to rendering stereo pairs of views from a set of omnidirectional mosaic images, allowing an arbitrary viewing direction and vergence angle for the viewer's two eyes. Moreover, we allow the viewer to move his head aside to see behind occluding objects. We propose a representation of the scene as a set of omnidirectional mosaic images composed from a sequence of images acquired by an omnidirectional camera equipped with a lens with a field of view of 183°. The proposed representation allows fast access to high-resolution mosaic images and an efficient representation in memory. The proposed method can be applied to the representation of a real scene where the viewer is supposed to stand at one spot and look around.
{"title":"Rendering novel views from a set of omnidirectional mosaic images","authors":"H. Bakstein, T. Pajdla","doi":"10.1109/CVPRW.2003.10079","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10079","url":null,"abstract":"We present an approach to rendering stereo pairs of views from a set of omnidirectional mosaic images allowing arbitrary viewing direction and vergence angle of two eyes of a viewer. Moreover, we allow the viewer to move his head aside to see behind occluding objects. We propose a representation of the scene in a set of omnidirectional mosaic images composed from a sequence of images acquired by an omnidirectional camera equipped with a lens with a field of view of 183°. The proposed representation allows fast access to high resolution mosaic images and efficient representation in the memory. The proposed method can be applied in a representation of a real scene, where the viewer is supposed to stand at one spot and look around.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130151825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Omnidirectional Egomotion Estimation From Back-projection Flow
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10074
O. Shakernia, R. Vidal, S. Sastry
The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.
{"title":"Omnidirectional Egomotion Estimation From Back-projection Flow","authors":"O. Shakernia, R. Vidal, S. Sastry","doi":"10.1109/CVPRW.2003.10074","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10074","url":null,"abstract":"The current state-of-the-art for egomotion estimation with omnidirectional cameras is to map the optical flow to the sphere and then apply egomotion algorithms for spherical projection. In this paper, we propose to back-project image points to a virtual curved retina that is intrinsic to the geometry of the central panoramic camera, and compute the optical flow on this retina: the so-called back-projection flow. We show that well-known egomotion algorithms can be easily adapted to work with the back-projection flow. We present extensive simulation results showing that in the presence of noise, egomotion algorithms perform better by using back-projection flow when the camera translation is in the X-Y plane. Thus, the proposed method is preferable in applications where there is no Z-axis translation, such as ground robot navigation.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132438175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
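The baseline the abstract contrasts against, mapping image points of a central panoramic camera to the unit sphere, can be sketched with the standard unified central catadioptric model (not the paper's virtual curved retina). The focal length `f` and mirror parameter `xi` here are hypothetical calibration values.

```python
import numpy as np

def to_sphere(u, v, f, xi):
    """Lift a pixel (u, v) of a central panoramic camera to the unit
    sphere, using the unified central catadioptric model with mirror
    parameter xi (xi = 0 reduces to a perspective camera)."""
    x, y = u / f, v / f
    r2 = x * x + y * y
    # scale along the viewing ray that places the point on the sphere
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (1.0 + r2)
    p = np.array([eta * x, eta * y, eta - xi])
    return p / np.linalg.norm(p)
```

Optical flow mapped through this lifting is what sphere-based egomotion algorithms consume; the paper's contribution is to compute the flow on the back-projection retina instead.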
Background Line Detection with A Stochastic Model
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10029
Yefeng Zheng, Huiping Li, D. Doermann
Background lines often exist in textual documents. It is important to detect and remove those lines so that text can be easily segmented and recognized. This paper proposes a stochastic model that incorporates high-level contextual information to detect severely broken lines. We observed that 1) background lines are parallel, and 2) the vertical gaps between any two neighboring lines are roughly equal, with small variance. The novelty of our algorithm is that we use an HMM to model the projection profile along the estimated skew angle, and estimate the optimal positions of all background lines simultaneously with the Viterbi algorithm. Compared with our previous deterministic-model-based approach [15], the new method is much more robust and correctly detects about 96.8% of the background lines in our Arabic document database.
{"title":"Background Line Detection with A Stochastic Model","authors":"Yefeng Zheng, Huiping Li, D. Doermann","doi":"10.1109/CVPRW.2003.10029","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10029","url":null,"abstract":"Background lines often exist in textual documents. It is important to detect and remove those lines so text can be easily segmented and recognized. A stochastic model is proposed in this paper which incorporates the high level contextual information to detect severely broken lines. We observed that 1) background lines are parallel, and 2) the vertical gaps between any two neighboring lines are roughly equal with small variance. The novelty of our algorithm is we use a HMM model to model the projection profile along the estimated skew angle, and estimate the optimal positions of all background lines simultaneously based on the Viterbi algorithm. Compared with our previous deterministic model based approach [15], the new method is much more robust and detects about 96.8% background lines correctly in our Arabic document database.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132954364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
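The "roughly equal gaps" observation plus Viterbi decoding can be illustrated with a toy dynamic program over a projection profile. This is a simplified stand-in for the paper's HMM, with made-up parameter names (`gap`, `tol`, `penalty`), not the authors' model.

```python
import numpy as np

def detect_lines(profile, n_lines, gap, tol=2, penalty=1.0):
    """Viterbi-style DP for equally spaced background lines.

    profile[y] scores how line-like row y is (e.g. a projection
    profile along the estimated skew angle).  Consecutive lines must
    be gap +/- tol rows apart; deviations from the nominal gap are
    penalized, mirroring the small-variance gap observation."""
    H = len(profile)
    NEG = -1e18
    score = np.full((n_lines, H), NEG)
    back = np.zeros((n_lines, H), dtype=int)
    score[0] = profile
    for k in range(1, n_lines):
        for y in range(H):
            for d in range(gap - tol, gap + tol + 1):
                p = y - d
                if p < 0:
                    continue
                s = score[k - 1, p] + profile[y] - penalty * abs(d - gap)
                if s > score[k, y]:
                    score[k, y] = s
                    back[k, y] = p
    # backtrack from the best final position
    y = int(np.argmax(score[-1]))
    lines = [y]
    for k in range(n_lines - 1, 0, -1):
        y = int(back[k, y])
        lines.append(y)
    return lines[::-1]
```

Because all line positions are decoded jointly, a severely broken line (a weak profile peak) can still be recovered from the positions of its neighbors.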
Optical flow estimation in omnidirectional images using wavelet approach
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10080
C. Demonceaux, D. Kachi-Akkouche
Motion estimation in image sequences is a significant problem in image processing. Much research has been carried out on this subject for image sequences from a traditional camera, and these techniques have been applied to omnidirectional image sequences. However, the majority of these methods are not suited to this kind of sequence: they assume the flow is locally constant, whereas the omnidirectional sensor generates distortions that contradict this assumption. In this paper, we propose a fast method to compute the optical flow in omnidirectional image sequences. The method is based on a decomposition of the Brightness Change Constraint Equation on a wavelet basis. To account for the distortions created by the sensor, we replace the locally-constant-flow assumption used for traditional images with a more appropriate hypothesis.
{"title":"Optical flow estimation in omnidirectional images using wavelet approach","authors":"C. Demonceaux, D. Kachi-Akkouche","doi":"10.1109/CVPRW.2003.10080","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10080","url":null,"abstract":"The motion estimation computation in the image sequences is a significant problem in image processing. Many researches were carried out on this subject in the image sequences with a traditional camera. These techniques were applied in omnidirectional image sequences. But the majority of these methods are not adapted to this kind of sequences. Indeed they suppose the flow is locally constant but the omnidirectional sensor generates distortions which contradict this assumption. In this paper, we propose a fast method to compute the optical flow in omnidirectional image sequences. This method is based on a Brightness Change Constraint Equation decomposition on a wavelet basis. To take account of the distortions created by the sensor, we replace the assumption of flow locally constant used in traditional images by a hypothesis more appropriate.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"37 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114037534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
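The locally-constant-flow baseline that this paper improves on is a least-squares solve of the Brightness Change Constraint Equation Ix·u + Iy·v + It = 0 over a patch. A minimal sketch of that baseline (not the paper's wavelet formulation):

```python
import numpy as np

def bcce_flow(I0, I1):
    """Solve Ix*u + Iy*v + It = 0 in least squares over one patch,
    assuming a single constant flow (u, v) for the whole patch --
    exactly the assumption the omnidirectional distortions break."""
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

The wavelet approach of the paper instead projects the constraint onto a wavelet basis, which allows the constant-flow assumption to be replaced by a spatially varying model.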
Noise Adaptive Channel Smoothing of Low-Dose Images
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10018
H. Scharr, M. Felsberg, Per-Erik Forssén
Many nano-scale sensing techniques and image-processing applications are characterized by noisy, or corrupted, image data. Unlike typical camera-based computer vision imagery, where noise can be modeled quite well as additive, zero-mean white or Gaussian noise, nano-scale images suffer from low intensities and thus mainly from Poisson-like noise. In addition, noise distributions cannot be considered symmetric due to the limited gray-value range of sensors and the resulting truncation of over- and underflows. In this paper we adapt B-spline channel smoothing to meet the requirements imposed by these noise characteristics. Like PDE-based diffusion schemes, it has a close connection to robust statistics; unlike diffusion schemes, it can handle non-zero-mean noise. In order to account for the multiplicative nature of Poisson noise, the variance of the smoothing kernels applied to each channel is properly adapted. We demonstrate the properties of this technique on noisy nano-scale images of silicon structures and compare it to anisotropic diffusion schemes that were specially adapted to this data.
{"title":"Noise Adaptive Channel Smoothing of Low-Dose Images","authors":"H. Scharr, M. Felsberg, Per-Erik Forssén","doi":"10.1109/CVPRW.2003.10018","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10018","url":null,"abstract":"Many nano-scale sensing techniques and image processing applications are characterized by noisy, or corrupted, image data. Unlike typical camera-based computer vision imagery where noise can be modeled quite well as additive, zero-mean white or Gaussian noise, nano-scale images suffer from low intensities and thus mainly from Poisson-like noise. In addition, noise distributions can not be considered symmetric due to the limited gray value range of sensors and resulting truncation of over- and underflows. In this paper we adapt B-spline channel smoothing to meet the requirements imposed by this noise characteristics. Like PDE-based diffusion schemes it has a close connection to robust statistics but, unlike diffusion schemes, it can handle non-zero-mean noises. In order to account for the multiplicative nature of Poisson noise the variance of the smoothing kernels applied to each channel is properly adapted. We demonstrate the properties of this technique on noisy nano-scale images of silicon structures and compare to anisotropic diffusion schemes that were specially adapted to this data.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123336163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
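For Poisson noise the variance equals the mean, so intensity-independent smoothing is the wrong tool. One classic way to handle this, shown here as a simple alternative to the paper's noise-adaptive B-spline channel smoothing, is to variance-stabilize with the Anscombe transform, smooth, and invert. The box blur and all names are illustrative.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson counts map to values
    with approximately unit variance regardless of intensity."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def denoise_poisson(img, k=3):
    """Stabilize, smooth, invert: a minimal Poisson-aware pipeline."""
    return inverse_anscombe(box_blur(anscombe(img), k))
```

The paper's approach stays in the original intensity domain and instead adapts the variance of the per-channel smoothing kernels, which also copes with the asymmetric, truncated distributions mentioned above.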
A visual and interactive tool for optimizing lexical postcorrection of OCR results
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10031
Christian M. Strohmaier, Christoph Ringlstetter, K. Schulz, S. Mihov
Systems for postcorrection of OCR results can be fine-tuned and adapted to new recognition tasks in many respects. One issue is the selection and adaptation of a suitable background dictionary. Another is the choice of a correction model, which includes, among other decisions, the selection of an appropriate distance measure for strings and the choice of a scoring function for ranking distinct correction alternatives. When combining the results obtained from distinct OCR engines, further parameters have to be fixed. Due to all these degrees of freedom, adapting and fine-tuning systems for lexical postcorrection is a difficult process. Here we describe a visual and interactive tool that semi-automates the generation of ground-truth data, partially automates the adjustment of parameters, actively supports error analysis, and thus helps to find correction strategies that lead to high accuracy with realistic effort.
{"title":"A visual and interactive tool for optimizing lexical postcorrection of OCR results","authors":"Christian M. Strohmaier, Christoph Ringlstetter, K. Schulz, S. Mihov","doi":"10.1109/CVPRW.2003.10031","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10031","url":null,"abstract":"Systems for postcorrection of OCR-results can be fine tuned and adapted to new recognition tasks in many respects. One issue is the selection and adaption of a suitable background dictionary. Another issue is the choice of a correction model, which includes, among other decisions, the selection of an appropriate distance measure for strings and the choice of a scoring function for ranking distinct correction alternatives. When combining the results obtained from distinct OCR engines, further parameters have to be fixed. Due to all these degrees of freedom, adaption and fine tuning of systems for lexical postcorrection is a difficult process. Here we describe a visual and interactive tool that semi-automates the generation of ground truth data, partially automates adjustment of parameters, yields active support for error analysis and thus helps to find correction strategies that lead to high accuracy with realistic effort.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126202276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
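A standard candidate for the "distance measure for strings" mentioned in the abstract is Levenshtein edit distance; the abstract does not say which measure the tool uses, so this is only one plausible choice, sketched with the classic dynamic program.

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]
```

A postcorrection system would rank dictionary words by such a distance to the OCR output, then score the alternatives with the chosen scoring function.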
Realistic Textures for Virtual Anastylosis
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10013
A. Zalesny, D. D. Maur, R. Paget, M. Vergauwen, L. Gool
In the construction of 3D models of archaeological sites, especially during the anastylosis (piecing together dismembered remains of buildings), much more emphasis has been placed on the creation of the 3D shapes rather than on their textures. Nevertheless, the overall visual impression will often depend more on these textures than on the precision of the underlying geometry. This paper proposes a hierarchical texture modeling and synthesis technique to simulate the intricate appearances of building materials and landscapes. A macrotexture or "label map" prescribes the layout of microtextures or "subtextures". The system takes example images, e.g. of a certain vegetation landscape, as input and generates the corresponding composite texture models. From such models, arbitrary amounts of similar, non-repetitive texture can be generated (i.e. without verbatim copying). The creation of the composite texture models follows a kind of bootstrap procedure, where simple texture features help to generate the label map and then more complicated texture descriptions are called on for the subtextures.
{"title":"Realistic Textures for Virtual Anastylosis","authors":"A. Zalesny, D. D. Maur, R. Paget, M. Vergauwen, L. Gool","doi":"10.1109/CVPRW.2003.10013","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10013","url":null,"abstract":"In the construction of 3D models of archaeological sites, especially during the anastylosis (piecing together dismembered remains of buildings), much more emphasis has been placed on the creation of the 3D shapes rather than on their textures. Nevertheless, the overall visual impression will often depend more on these textures than on the precision of the underlying geometry. This paper proposes a hierarchical texture modeling and synthesis technique to simulate the intricate appearances of building materials and landscapes. A macrotexture or \"label map\" prescribes the layout of microtextures or \"subtextures\". The system takes example images, e.g. of a certain vegetation landscape, as input and generates the corresponding composite texture models. From such models, arbitrary amounts of similar, non-repetitive texture can be generated (i.e. without verbatim copying). The creation of the composite texture models follows a kind of bootstrap procedure, where simple texture features help to generate the label map and then more complicated texture descriptions are called on for the subtextures.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128015168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
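The macrotexture/subtexture idea, a label map prescribing which microtexture fills each region, can be shown with a toy sampler. This is a drastic simplification: the paper synthesizes each subtexture from a learned model, whereas this sketch merely samples pixel values from per-label example pools, and all names are mine.

```python
import numpy as np

def composite_texture(label_map, examples, rng=None):
    """Fill each output pixel from the example pool of its label.

    label_map : integer array, the macrotexture ("label map")
    examples  : dict mapping label -> 1-D array of example values
    """
    rng = rng or np.random.default_rng(0)
    out = np.zeros(label_map.shape, dtype=float)
    for lbl, pool in examples.items():
        mask = label_map == lbl
        out[mask] = rng.choice(pool, size=mask.sum())
    return out
```

In the paper's bootstrap procedure, the label map itself is generated from simple texture features before the richer subtexture models are invoked.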
Structure from Small Baseline Motion with Central Panoramic Cameras
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10077
O. Shakernia, R. Vidal, S. Sastry
In applications of egomotion estimation, such as real-time vision-based navigation, one must deal with the double-edged sword of small relative motions between images. On one hand, tracking feature points is easier, while on the other, two-view structure-from-motion algorithms are poorly conditioned due to the low signal-to-noise ratio. In this paper, we derive a multi-frame structure from motion algorithm for calibrated central panoramic cameras. Our algorithm avoids the conditioning problem by explicitly incorporating the small baseline assumption in the algorithm's design. The proposed algorithm is linear, amenable to real-time implementation, and performs well in the small baseline domain for which it is designed.
{"title":"Structure from Small Baseline Motion with Central Panoramic Cameras","authors":"O. Shakernia, R. Vidal, S. Sastry","doi":"10.1109/CVPRW.2003.10077","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10077","url":null,"abstract":"In applications of egomotion estimation, such as real-time vision-based navigation, one must deal with the double-edged sword of small relative motions between images. On one hand, tracking feature points is easier, while on the other, two-view structure-from-motion algorithms are poorly conditioned due to the low signal-to-noise ratio. In this paper, we derive a multi-frame structure from motion algorithm for calibrated central panoramic cameras. Our algorithm avoids the conditioning problem by explicitly incorporating the small baseline assumption in the algorithm's design. The proposed algorithm is linear, amenable to real-time implementation, and performs well in the small baseline domain for which it is designed.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130962900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
The Beauvais Cathedral Project
Pub Date : 2003-06-16 DOI: 10.1109/CVPRW.2003.10004
P. Allen, Alejandro J. Troccoli, Benjamin Smith, I. Stamos, Stephen Murray
Preserving cultural heritage and historic sites is an important problem. These sites are subject to erosion and vandalism, and, as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites, as they currently are, using 3-D model-building technology, so preservationists can track changes, foresee structural problems, and allow a wider audience to "virtually" see and tour these sites. Due to the complexity of these sites, building 3-D models is time-consuming and difficult, usually involving much manual effort. This paper discusses new methods that can reduce the time to build a model using automatic methods. Examples of these methods are shown in reconstructing a model of the Cathedral of Saint-Pierre in Beauvais, France.
{"title":"The Beauvais Cathedral Project","authors":"P. Allen, Alejandro J. Troccoli, Benjamin Smith, I. Stamos, Stephen Murray","doi":"10.1109/CVPRW.2003.10004","DOIUrl":"https://doi.org/10.1109/CVPRW.2003.10004","url":null,"abstract":"Preserving cultural heritage and historic sites is an important problem. These sites are subject to erosion, vandalism, and as long-lived artifacts, they have gone through many phases of construction, damage and repair. It is important to keep an accurate record of these sites using 3-D model building technology as they currently are, so preservationists can track changes, foresee structural problems, and allow a wider audience to \"virtually\" see and tour these sites. Due to the complexity of these sites, building 3-D models is time consuming and difficult, usually involving much manual effort. This paper discusses new methods that can reduce the time to build a model using automatic methods. Examples of these methods are shown in reconstructing a model of the Cathedral of Saint-Pierre in Beauvais, France.","PeriodicalId":121249,"journal":{"name":"2003 Conference on Computer Vision and Pattern Recognition Workshop","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123761599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10