
Latest publications from the 35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)

Symbolic Road Perception-based Autonomous Driving in Urban Environments
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.38
Mike Foedisch, R. Madhavan, C. Schlenoff
Our previous work on road detection for autonomous road vehicles suggests the usage of high-level symbolic knowledge about the road structure. In this paper, we present our new approach to symbolic road recognition. We explain feature extraction, model representation, and the tree search-based matching processes and discuss performance evaluation results.
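The tree search-based matching mentioned in the abstract can be illustrated, very loosely, as searching over assignments of detected road features to the slots of a symbolic road model. The sketch below is a toy stand-in: the model widths, detected widths, and the exhaustive permutation search are all invented here, and the paper's actual model representation and search are certainly richer.

```python
import itertools

# Toy "symbolic model": expected widths (m) of lane, lane, shoulder.
# Values are invented for illustration only.
model_widths = [3.5, 3.0, 2.0]

# Widths of road segments detected in an image, in arbitrary order.
detected_widths = [2.9, 2.1, 3.6]

# Exhaustively search assignments of detections to model slots and keep
# the lowest-cost complete match (a depth-first tree search would prune
# this same space instead of enumerating it).
best = min(itertools.permutations(detected_widths),
           key=lambda p: sum(abs(m - d) for m, d in zip(model_widths, p)))
print(best)  # the assignment closest to the model: (3.6, 2.9, 2.1)
```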
Citations: 5
Some Unmixing Problems and Algorithms in Spectroscopy and Hyperspectral Imaging
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.37
M. Berman
The automated identification and mapping of the constituent materials in a hyperspectral image is a problem of considerable interest. A significant issue is that the spectra at many pixels in such an image are actually mixtures of the spectra of the pure constituents. I review methods of "unmixing" spectra into their pure constituents, both when a "spectral library" of the pure constituents is available, and where no such library is available. Our own algorithms in both these areas are exemplified with a mineral and a biological example.
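The linear-mixing idea behind library-based unmixing can be sketched in a few lines: a mixed-pixel spectrum is modeled as a linear combination of pure-constituent spectra drawn from a "spectral library". The library and mixing fractions below are invented, and practical unmixers (including those the abstract alludes to) typically add non-negativity and sum-to-one constraints on the abundances.

```python
import numpy as np

# A tiny invented "spectral library": 5 bands x 2 pure constituents.
E = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.6, 0.4],
              [0.4, 0.6],
              [0.2, 0.8]])

true_abund = np.array([0.7, 0.3])   # mixing fractions of the pure spectra
pixel = E @ true_abund              # noiseless mixed-pixel spectrum

# Unconstrained least-squares unmixing; real pipelines would constrain
# the abundances to be non-negative (e.g. NNLS) and sum to one.
est_abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(est_abund)                    # recovers the mixing fractions
```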
Citations: 8
Cramer-Rao Lower Bound Calculations for Registration of Linearly-Filtered Images
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.19
D. Tyler
I have developed covariance matrix expressions for the registration Cramer-Rao lower bound for images processed with a linear filter. These results also generalize a previous registration CRLB by accounting for Poisson noise as well as read noise. Expressions shown here have been translated into a Fortran 90 code currently being tested.
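For context, the scalar version of a registration CRLB is the textbook result for estimating a 1-D translation in additive Gaussian (read) noise: the Fisher information is the sum of squared signal derivatives over the noise variance, and the bound is its reciprocal. This is only the baseline case, not the covariance-matrix expressions for linearly filtered images (or the Poisson-noise generalization) derived in the paper; the signal and noise level below are invented.

```python
import numpy as np

# Sampled 1-D "image" profile and its derivative with respect to shift.
x = np.linspace(-5.0, 5.0, 1001)
s = np.exp(-x**2)                  # smooth signal, invented for illustration
ds = np.gradient(s, x)             # d s(x - theta) / d theta at theta = 0

sigma = 0.01                       # read-noise standard deviation (invented)

# Fisher information and CRLB for the shift parameter under Gaussian noise.
fisher = np.sum(ds**2) / sigma**2
crlb = 1.0 / fisher                # lower bound on the shift-estimate variance
print(crlb)
```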
Citations: 1
Automatic Identification of Shot Body Region from Clinical Photographies
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.17
H. Iyatomi, H. Oka, Masaru Tanaka, K. Ogawa
Administration of clinical photographs taken with commonly used digital cameras often requires troublesome manual operation. In this paper, we present a prototype scheme for automatically identifying the photographed body region in clinical images, to help reduce this administration task. A total of 8047 clinical photographs taken in the department of dermatology, Keio University Hospital, were classified into 11 categories (head, hair, upper limb, lower limb, trunk, palm, sole, back of hand, back of foot, finger & detent, and genital) to meet requests from several dermatologists, and we developed separate linear classifiers for each body region. The developed classifiers achieved 82.8% sensitivity (SE) and 82.0% specificity (SP) on average. In addition, integrating these classifiers with consideration of the feature space of each body region improved SP by up to 2.3% and precision (PR) by up to 3.0% when the classification threshold was set to around 75% SE. The proposed scheme requires only the photographs themselves to identify the photographed area, so it can easily be applied to the DICOM (Digital Imaging and Communications in Medicine) systems commonly used in clinical practice, or to other medical database systems.
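The "separate linear classifiers for each body region" can be pictured as a one-vs-rest scheme: one linear score per region, with the highest-scoring region taken as the prediction. The region names, weights, and features below are invented placeholders, not the paper's trained classifiers.

```python
import numpy as np

# One-vs-rest linear classification over body regions (illustrative only).
regions = ["head", "trunk", "palm"]

# One weight row and bias per region; values are invented.
W = np.array([[ 2.0, -1.0],
              [-0.5,  1.5],
              [ 1.0,  1.0]])
b = np.array([0.0, 0.2, -0.1])

def classify(features):
    """Score every region's linear classifier and return the best region."""
    scores = W @ features + b
    return regions[int(np.argmax(scores))]

print(classify(np.array([1.0, 0.1])))  # -> "head" for this invented feature
```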
Citations: 0
Automatic Alignment of Color Imagery onto 3D Laser Radar Data
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.16
A. Vasile, Frederick R. Waugh, Daniel Greisokh, R. Heinrichs
We present an algorithm for the automatic fusion of city-sized, 2D color imagery to 3D laser radar imagery collected from distinct airborne platforms at different times. Our approach is to derive pseudo-intensity images from ladar imagery and to align these with color imagery using conventional 2D registration algorithms. To construct a pseudo-intensity image, the algorithm uses the color imagery's time of day and location to predict shadows in the 3D image, then determines ambient and sun lighting conditions by histogram matching the 3D-derived shadowed and non-shadowed regions to their 2D counterparts. A projection matrix is computed to bring the pseudo-image into 2D image coordinates, resulting in an initial alignment of the imagery to within 200 meters. Finally, the 2D intensity image and 3D generated pseudo-intensity image are registered using a modified normalized correlation algorithm to solve for rotation, translation, scale and lens distortion, resulting in a fused data set that is aligned to within 1 meter.
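The histogram-matching step the abstract relies on (relating the 3D-derived shadow regions to their 2D counterparts) is a standard operation: map each source value through its own CDF and then through the inverse CDF of the template. The arrays below are synthetic stand-ins for region pixel intensities, not the paper's data.

```python
import numpy as np

def hist_match(source, template):
    """Remap `source` so its value distribution matches `template`."""
    s_sorted = np.sort(source.ravel())
    t_sorted = np.sort(template.ravel())
    # Quantile of each source pixel, then the template value at that quantile.
    ranks = np.searchsorted(s_sorted, source.ravel(), side="right") / source.size
    matched = np.interp(ranks, np.linspace(0.0, 1.0, t_sorted.size), t_sorted)
    return matched.reshape(source.shape)

rng = np.random.default_rng(0)
shadow_3d = rng.normal(50.0, 5.0, size=(32, 32))    # pseudo-intensity shadows
shadow_2d = rng.normal(90.0, 10.0, size=(32, 32))   # color-image shadows
out = hist_match(shadow_3d, shadow_2d)
print(out.mean())  # close to the template's mean brightness
```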
Citations: 13
Semi-automated 3-D Building Extraction from Stereo Imagery
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.36
S. Lee, K. Price, R. Nevatia, T. Heinze, J. Irvine
The production of geospatial information from overhead imagery is generally a labor-intensive process. Analysts must accurately delineate and extract important features, such as buildings, roads, and landcover from the imagery. Automated feature extraction (AFE) tools offer the prospect of reducing the analyst's workload. This paper presents a new tool, called iMVS, for extracting buildings and discusses user testing conducted by the National Geospatial-Intelligence Agency (NGA). Using a semi-automated approach, iMVS processes two or more images to form a set of hypothesized 3-D buildings. When the user clicks on one of the building vertices, the system determines which hypothesis is the best fit and extracts the building. A set of powerful editing tools support rapid clean-up of the extraction, including extraction of complex buildings. User testing of iMVS provides an assessment of the benefits and identifies areas for system improvement.
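The click-to-select step can be pictured as a nearest-vertex test: among the hypothesized building footprints, choose the one whose closest vertex lies nearest the user's click. The footprints and click coordinates below are invented, and iMVS's actual fit criterion may well be more sophisticated.

```python
import numpy as np

def best_hypothesis(click, hypotheses):
    """Return the index of the footprint with the vertex nearest the click."""
    dists = [min(np.hypot(vx - click[0], vy - click[1]) for vx, vy in verts)
             for verts in hypotheses]
    return int(np.argmin(dists))

# Two invented hypothesized footprints (lists of (x, y) vertices).
square = [(0, 0), (0, 10), (10, 10), (10, 0)]
offset = [(30, 30), (30, 40), (40, 40), (40, 30)]

# A click near the square's (10, 10) corner selects hypothesis 0.
print(best_hypothesis((9.5, 10.2), [square, offset]))
```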
Citations: 2
Advanced Techniques for Watershed Visualization
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.10
V. J. Alarcon, C. O'Hara
Analytical shaded relief is commonly used for visualization of digital elevation models (DEMs). Sometimes, unaltered analytical shaded relief is of insufficient quality for identifying streams and water divides. Hydroshading is a technique that provides enhanced visualization of hydrologically meaningful topographical features. In this research, hydroshading algorithms are applied to NASA Shuttle Radar Topography Mission (SRTM) DEM datasets. The visualization technique is applied to coastal and inland watersheds in Mississippi (Saint Louis Bay and Luxapallila, respectively). Testing hydroshading in these two areas shows that the technique is more effective in areas of moderate topographical relief than in low-relief terrain. Combining hydroshading with standard three-dimensional visualization, the hydroshaded DEMs were used to manually delineate the Luxapallila and Saint Louis Bay (Wolf River) catchments. Delineation results are comparable to the output of standard automated delineation produced by GIS software (BASINS).
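The analytical shaded relief that hydroshading builds on is the standard Lambertian hillshade: slope and aspect from DEM gradients, illuminated from a given sun azimuth and altitude. The sketch below shows that baseline on an invented DEM; the hydrology-aware enhancements the paper adds are not reproduced here, and aspect-angle conventions vary between GIS packages.

```python
import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Lambertian analytical shaded relief of a DEM array."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # compass -> math angle
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)        # terrain gradients
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# A single synthetic hill stands in for an SRTM tile.
y, x = np.mgrid[0:64, 0:64]
dem = 20.0 * np.exp(-((x - 32)**2 + (y - 32)**2) / 200.0)
img = hillshade(dem)
print(img.shape)
```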
Citations: 3
An Automatic Target Classifier using Model Based Image Processing
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.12
D. Haanpaa, G. Beach, C. Cohen
A primary mission of air assets is to detect and destroy enemy ground targets. In order to accomplish this mission, it is essential to detect, track, and classify contacts to determine which are valid targets. Traditional combat identification has been performed using all-weather sensors and processing algorithms designed specifically for such sensor data. Electro-optical (EO) sensors produce a very different type of data that does not lend itself to traditional combat identification algorithms. This paper will detail how we analyzed the visual and physical characteristics of a large number of potential targets. The results of this analysis were used to drive the requirements of a demonstration system. We will detail the test data we collected from the military and CAD models for likely targets, as well as overall requirements for system performance.
Citations: 0
Validation Techniques for Image-Based Simulations
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.39
D. Fraedrich
This paper describes a series of steps for performing model validation and interpreting the results. The first step is to establish how much error is allowable; if this is not done, the results of the validation will be very hard to interpret. Once this allowable error is established, the paper describes a procedure for performing the validation and analyzing the results in a productive way.
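The two-step workflow the abstract describes, fix an allowable error first, then judge model output against it, can be sketched in a few lines. All numbers below are invented; the paper's procedure for choosing the allowable error and analyzing residuals is the substantive contribution.

```python
# Step 1: agree on an allowable error BEFORE looking at the comparison.
allowable_error = 0.6

# Step 2: compare simulated output against measurements (invented values).
measured  = [10.2, 9.8, 10.5, 10.1]
simulated = [10.0, 10.0, 10.0, 10.0]

errors = [abs(m - s) for m, s in zip(measured, simulated)]
valid = max(errors) <= allowable_error   # worst-case error within tolerance?
print(valid)  # -> True for these numbers
```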
Citations: 1
Robust Adjusted Likelihood Function for Image Analysis
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.34
Rong Duan, Wei Jiang, H. Man
Model misspecification has been a major concern in practical model-based image analysis. The underlying assumptions of generative processes usually cannot exactly describe real-world data samples, which renders maximum likelihood estimation (MLE) and Bayesian decision methods unreliable. In this work we study a robust adjusted likelihood (RAL) function that can improve image classification performance under misspecified models. The RAL is calculated by raising the conventional likelihood function to a positive power and multiplying it by a scaling factor. Similar to model parameter estimation, these two new RAL parameters, i.e. the power and the scaling factor, are estimated from the training data using a minimum error rate method. In the two-category classification case, this RAL is equivalent to a linear discriminant function in log-likelihood space. To demonstrate the effectiveness of this RAL, we first simulate a model misspecification scenario, in which two Rayleigh sources are misspecified as Gaussian distributions. The Gaussian parameters and the RAL parameters are estimated accordingly from the training data, and the two RAL parameters are studied separately. The simulation results show that the Bayes decisions based on maximum-RAL yield higher classification accuracy than the decisions based on conventional maximum likelihood. We further apply the RAL in automatic target recognition (ATR) of SAR images. Two target classes, T72 and BMP2, from the MSTAR SAR target dataset are used in this study. The target signatures are modeled using Gaussian mixture models (GMMs) with five mixtures for each class. Image classification results again demonstrate a clear advantage of the proposed approach.
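The RAL construction itself is simple to state: raise the likelihood to a positive power a and scale by c, so the log-space decision statistic becomes a*log L(x) + log c, i.e. linear in the log-likelihood. The sketch below shows only this mechanics on a Rayleigh-data/Gaussian-model setup that mirrors the paper's simulation; the specific (a, c) values are invented, whereas the paper fits them by minimum error rate on training data.

```python
import numpy as np

def log_ral(loglik, a, c):
    """Log of the robust adjusted likelihood c * L(x)**a."""
    return a * loglik + np.log(c)

def gauss_loglik(x, mu, sigma):
    """Log-likelihood under a (possibly misspecified) Gaussian model."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
x = rng.rayleigh(scale=2.0, size=1000)   # true data are Rayleigh...

ll0 = gauss_loglik(x, 2.5, 1.3)          # ...but both class models are
ll1 = gauss_loglik(x, 4.0, 1.3)          # (mis)specified as Gaussians

# Plain maximum-likelihood rule vs. RAL rule with illustrative (a, c).
ml_rate  = float(np.mean(ll1 > ll0))
ral_rate = float(np.mean(log_ral(ll1, 0.8, 1.2) > log_ral(ll0, 1.0, 1.0)))
print(ml_rate, ral_rate)
```

With a = 1 and c = 1 the RAL reduces exactly to the conventional likelihood, which is a useful sanity check on any implementation.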
Citations: 1