
Latest publications: Proceedings of 1994 IEEE Workshop on Applications of Computer Vision

A system for aircraft recognition in perspective aerial images
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341305
Subhodev Das, B. Bhanu, Xingzhi. Wu, R. Braithwaite
Recognition of aircraft in complex, perspective aerial imagery has to be accomplished in the presence of clutter, occlusion, shadow, and various forms of image degradation. This paper presents a system for aircraft recognition under real-world conditions that is based on the use of a hierarchical database of object models. The approach involves three key processes: (a) the qualitative object recognition process performs model-based symbolic feature extraction and generic object recognition; (b) the refocused matching and evaluation process refines the extracted features for more specific classification with input from (a); and (c) the primitive feature extraction process regulates the extracted features based on their saliency and interacts with (a) and (b). Experimental results showing the qualitative recognition of aircraft in perspective aerial images are presented.
Citations: 6
Anatomy of a hand-filled form reader
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341309
A. K. Chhabra
We describe a prototype generic form reader (GFR) system for reading hand-filled forms. The system can read run-on or touching handprinted characters. A one-time form specification is required for each type of form that the system is expected to read. The form specification includes the geometric locations of registration marks and fields of interest, field grammars, and system parameters. The GFR begins by detecting registration marks, computing image skew, extracting deskewed fields, and computing connected components in the field images. Next, the connected components are split into segments using heuristics about good splitting points. The system is liberal in splitting, i.e., a split segment could be a part of a character or a complete character, and hopefully no more than a character. Next, the segments are adaptively regrouped into 'seg-groups' with the aid of a dynamic programming algorithm that matches the character answers for the seg-groups against the field grammar specification. The single character recognizer (SCR) uses high-order combinations of raw geometric features derived from segments and seg-groups. The high-order combining rules are derived by statistical discriminant analysis of the raw features. The GFR system provides some generic tools that can be applied to other document image analysis problems besides forms reading.
Citations: 2
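The adaptive regrouping step described in the abstract above lends itself to a standard dynamic-programming formulation. Below is a minimal sketch under assumed inputs: `scores[(i, j)]` is a hypothetical stand-in for the single character recognizer's confidence that segments i..j-1 form one character (the paper's actual scoring also consults the field grammar).

```python
# Dynamic-programming regrouping of split segments into "seg-groups".
# scores[(i, j)] is a hypothetical per-group recognizer confidence.

MAX_PER_CHAR = 3  # a split segment is at most part of one character

def best_grouping(scores):
    """Return (total score, seg-group boundaries) maximizing recognition score."""
    n = max(j for _, j in scores)
    best = [float("-inf")] * (n + 1)
    best[0] = 0.0
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(max(0, j - MAX_PER_CHAR), j):
            s = scores.get((i, j), float("-inf"))
            if best[i] + s > best[j]:
                best[j] = best[i] + s
                back[j] = i
    groups, j = [], n
    while j > 0:                      # walk back through the chosen cuts
        groups.append((back[j], j))
        j = back[j]
    return best[n], groups[::-1]

# Toy field with 3 segments: grouping segment 0 alone and 1-2 together wins.
scores = {(0, 1): 0.9, (1, 2): 0.2, (2, 3): 0.3,
          (0, 2): 0.1, (1, 3): 0.8, (0, 3): 0.05}
total, groups = best_grouping(scores)
print(groups)  # [(0, 1), (1, 3)]
```

Because each character spans at most MAX_PER_CHAR segments, the DP runs in O(n) for a fixed bound, so liberal over-splitting stays cheap to undo.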
Frameless registration of MR and CT 3D volumetric data sets
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341316
Rakesh Kumar, Kristin J. Dana, P. Anandan, Neil E. Okamoto, J. Bergen, P. Hemler, T. Sumanaweera, P. Elsen, J. Adler
In this paper we present techniques for frameless registration of 3D Magnetic Resonance (MR) and Computed Tomography (CT) volumetric data of the head and spine. We present techniques for estimating a 3D affine or rigid transform which can be used to resample the CT (or MR) data to align with the MR (or CT) data. Our technique transforms the MR and CT data sets with spatial filters so they can be directly matched. The matching is done by a direct optimization technique using a gradient-based descent approach and a coarse-to-fine control strategy over a 4D pyramid. We present results on registering the head and spine data by matching 3D edges, and results on registering cranial ventricle data by matching images filtered by a Laplacian of a Gaussian.
Citations: 16
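The Laplacian-of-Gaussian pre-filtering mentioned in the abstract above can be illustrated with a small self-contained sketch; the kernel size and sigma below are illustrative choices, not values from the paper. The key property is that the zero-sum kernel suppresses absolute intensity (which differs between MR and CT) and keeps edge structure.

```python
import numpy as np

# Hedged sketch of a Laplacian-of-Gaussian (LoG) filter: convolving with a
# zero-sum LoG kernel reduces MR/CT intensity differences to matchable edges.

def log_kernel(size=9, sigma=1.4):
    """2-D Laplacian-of-Gaussian kernel, normalized to zero sum."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()  # force zero response on constant regions

def filter2d(img, k):
    """Naive 'valid' correlation -- enough to demonstrate the filter."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

flat = np.full((20, 20), 100.0)   # constant slice: no LoG response
resp = filter2d(flat, log_kernel())
print(np.allclose(resp, 0.0))     # True
```

A 3-D version of the same kernel would apply slice-by-slice or volumetrically before the coarse-to-fine descent over the pyramid.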
Model supported exploitation: quick look, detection and counting, and change detection
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341302
C. Huang, J. Mundy, Charlie Rothwell
Over the last several years the concept of model-supported exploitation (MSE) has evolved to a point where relatively simple computer vision algorithms can extract significant intelligence information from aerial images in a robust and reliable manner. Information extraction is enabled by the use of detailed 3D site models, which provide an extensive context for the application of image analysis algorithms. This paper reviews the basic MSE concept and illustrates the approach using three operational concepts taken from the RADIUS project: quick-look, detection and counting, and focused change detection.
Citations: 5
Genetic labeling and its application to depalletizing robot vision
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341307
M. Hashimoto, K. Sumi
Genetic labeling is a new labeling algorithm using a genetic algorithm (GA). Although several applications of GAs to low-level image processing, such as line detection, have been studied, they still require much computing time. We apply a GA to labeling for scene interpretation. The chromosome coding method we propose is such that each bit represents the existence of an object. Genetic operation enables efficient labeling based on the building block hypothesis. We have developed a vision system for a depalletizing robot using this technique. Object candidates are properly labeled, and the position of cartons is recognized. Through real image experiments, we estimated that genetic labeling is about 100 times faster than an improved enumerating method. We have also shown that the reliability and speed of this system are practical.
Citations: 3
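The chromosome coding described in the abstract above (one bit per candidate object) can be sketched with a toy GA. The fitness function here is a hypothetical stand-in that rewards matching a known labeling; the paper's fitness would instead score scene consistency among the detected carton candidates.

```python
import random

# Toy genetic labeling: each bit of a chromosome says whether a candidate
# object exists. TRUE_LABELS is a made-up target for demonstration only.

TRUE_LABELS = [1, 0, 1, 1, 0, 0, 1, 0]
N = len(TRUE_LABELS)

def fitness(chrom):
    """Number of bits agreeing with the (here, known) correct labeling."""
    return sum(c == t for c, t in zip(chrom, TRUE_LABELS))

def evolve(pop_size=30, generations=60, pmut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N)           # one-point crossover
            child = a[:cut] + b[cut:]
            child = [c ^ (rng.random() < pmut) for c in child]  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "of", N)
```

One-point crossover preserves contiguous bit patterns, which is the "building block" behavior the abstract appeals to.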
An automated stereoscopic coal profiling system-CCLPS
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341283
Philip W. Smith, N. Nandhakumar
This paper describes the design of a binocular stereo system called CCLPS (Computerized Coal Profiling System) that provides dense, accurate disparity maps of coal as it is being transported in open rail cars. After a quantitative analysis of previously developed cepstral correspondence techniques, which highlights the shortcomings of the cepstrum's matching ability in the presence of random noise and severe foreshortening distortion, we present a modified power cepstral approach that is less sensitive to these effects, along with analytical arguments verifying its robustness. The design of the CCLPS system is then discussed in detail and its performance is verified.
Citations: 4
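The cepstral correspondence idea that the paper above builds on can be illustrated in one dimension with synthetic data: a signal containing a delayed, attenuated copy of itself produces a power-cepstrum peak at the echo lag, which is how a disparity is read off a pair of matching stereo windows. This is only the baseline technique, not the paper's modified approach.

```python
import numpy as np

# Power cepstrum = |IFFT(log power spectrum)|^2. An echo at lag d adds a
# periodic ripple to the log spectrum, which the inverse FFT turns into a
# peak at quefrency d.

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
echo_lag = 40
y = x + 0.6 * np.roll(x, echo_lag)        # signal plus circular echo

power = np.abs(np.fft.fft(y)) ** 2
cep = np.abs(np.fft.ifft(np.log(power + 1e-12))) ** 2

peak = int(np.argmax(cep[5:len(cep) // 2])) + 5   # skip low quefrencies
print(peak)  # ~40, the echo lag
```

The abstract's point is that this peak degrades under noise and foreshortening; the modified power cepstral approach is designed to be less sensitive to exactly those effects.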
Image mosaicing for tele-reality applications
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341287
R. Szeliski
This paper presents some techniques for automatically deriving realistic 2-D scenes and 3-D geometric models from video sequences. These techniques can be used to build environments and 3-D models for virtual reality applications based on recreating a true scene, i.e., tele-reality applications. The fundamental technique used in this paper is image mosaicing, i.e., the automatic alignment of multiple images into larger aggregates which are then used to represent portions of a 3-D scene. The paper first examines the easiest problems, those of flat scene and panoramic scene mosaicing. It then progresses to more complicated scenes with depth, and concludes with full 3-D models. The paper also discusses a number of novel applications based on tele-reality technology.
Citations: 515
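The core alignment step in mosaicing can be illustrated for the simplest motion model, pure translation, using phase correlation; the paper handles richer models (flat, panoramic, depth, full 3-D), so this is only a sketch of the idea on synthetic data.

```python
import numpy as np

# Phase correlation: the normalized cross-power spectrum of two translated
# images is a pure phase ramp, whose inverse FFT is a delta at the shift.

def phase_correlate(a, b):
    """Return the (dy, dx) translation mapping image b onto image a."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                            # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(2)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (5, -9), axis=(0, 1))   # circularly shifted copy
print(phase_correlate(shifted, img))           # (5, -9)
```

In a real mosaicing pipeline this translation estimate would seed a finer alignment (affine, projective) solved by the kind of direct optimization the paper uses.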
Model validation for change detection [machine vision]
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341304
M. Bejanin, A. Huertas, G. Medioni, R. Nevatia
An important application of machine vision is to provide a means to monitor a scene over a period of time and report changes in the content of the scene. We have developed a validation mechanism that implements the first step towards a system for detecting changes in images of aerial scenes. By validation we mean the confirmation of the presence of model objects in the image. Our system uses a 3-D site model of the scene as a basis for model validation, and eventually for detecting changes and updating the site model. The scenario for our present validation system consists of adding a new image to a database associated with the site. The validation process is implemented in three steps: registration of the image to the model, or equivalently, determination of the position and orientation of the camera; matching of model features to image features; and validation of the objects in the model. Our system processes the new image monocularly and uses shadows as 3-D clues to help validate the model. The system has been tested using a hand-generated site model and several images of a 500:1 scale model of the site, acquired from several viewpoints.
Citations: 9
Compilation of mosaics from separately scanned line drawings
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341286
R. D. T. Janssen, A. Vossepoel
In automatic line drawing interpretation (e.g., map interpretation), one of the problems encountered is the finite size of scanners, or that scanners of the required size are not available. Often, one large scan is necessary for the interpretation process, instead of several smaller ones generated by the usual scanning in parts. This paper describes a method for automatically compiling mosaics from separately scanned line drawings. A mosaic is a collection of separately obtained images which are combined to form one larger image. The method is based on vectorization of the line drawings, which is used to select the control points for a geometric transformation automatically. It is not necessary to specify the overlap area between the line drawings. The resulting system is evaluated using large scale maps. Experiments with different overlaps between the line drawings were done. Results are good: the algorithm succeeds in finding accurate parameters for the transformation.
Citations: 7
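Once matched control points have been selected from the vectorized drawings, the geometric transformation mentioned in the abstract above can be estimated by linear least squares. A minimal sketch for the 2-D affine case, using synthetic point pairs rather than anything from the paper:

```python
import numpy as np

# Least-squares fit of a 2-D affine transform A (2x3) from matched control
# points, so that dst ~= A @ [x, y, 1].

def fit_affine(src, dst):
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])   # n x 3 homogeneous points
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 solution
    return A.T                                     # 2 x 3 affine matrix

# Synthetic check: rotate 30 degrees, scale by 2, translate by (10, -4).
theta = np.pi / 6
M = 2 * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
t = np.array([10.0, -4.0])
src = np.array([[0, 0], [1, 0], [0, 1], [3, 2], [5, 5]], float)
dst = src @ M.T + t

A = fit_affine(src, dst)
rec = np.hstack([src, np.ones((5, 1))]) @ A.T
print(np.allclose(rec, dst))  # True
```

With more than three point pairs the fit averages out localization error in individual control points, which is why automatic selection of many vector intersections pays off.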
Modelling issues in vision based aircraft navigation during landing
Pub Date : 1994-12-05 DOI: 10.1109/ACV.1994.341293
Tarun Soni, B. Sridhar
This paper investigates the feasibility of using visual and infrared imaging sensors to aid in locating the aircraft during operations such as landing in bad weather. The choice of the airport model used is crucial to algorithms which are used for position estimation based on pattern recognition. In this paper we describe the effects the choice of a model has on the behaviour of such matching algorithms. Three basic models are chosen: a line segment based model, an area based model, and a texture based model. It is seen that a sparse line segment based model is not adequate to identify the runway, since it matches a number of false artifacts in the image. An enhanced line segment based model containing a large number of features compares favourably with the area based model. The texture based model is seen to need a number of camera and weather dependent parameters, and the performance of such a scheme is not seen to be substantially better. Thus either a proper area based model or a pseudo-area based model (based on a very large number of line features) can be seen to provide the best performance for such landmark identification and position determination algorithms.
Citations: 14