
Latest publications from the 35th IEEE Applied Imagery and Pattern Recognition Workshop (AIPR'06)

Anatomically Guided Registration for Multimodal Images
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.14
M. Datar, Girish Gopalakrishnan, S. Ranjan, R. Mullick
With an increase in full-body scans and longitudinal acquisitions to track disease progression, it becomes significant to find correspondence between multiple images. One example would be monitoring the size/location of tumors using PET images during chemotherapy to determine treatment progression. While there is a need to go beyond a single parametric transform to recover misalignments, pure deformable solutions become complex, time-consuming and unnecessary at times. A simple anatomically guided approach to whole-body image registration offers enhanced alignment for large-coverage inter-scan studies. In this experiment, we provide anatomy-specific transformations to capture their independent motions. This solution is characterized by an automatic segmentation of regions in the image, followed by a custom registration and volume stitching. We have tested this algorithm on phantom images as well as clinical longitudinal datasets. We were successful in proving that decoupling transformations improves the overall registration quality.
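The decoupling idea in this abstract can be illustrated with a deliberately tiny 1-D sketch (not the authors' method: the paper works on 3-D volumes with segmentation, custom registration, and volume stitching). Here each labeled "anatomical" region gets its own independently estimated shift:

```python
import numpy as np

def per_region_shifts(fixed, moving, labels, radius=10):
    """Estimate an independent integer shift for each labeled region
    (label 0 = background) by sliding the region's patch over the
    fixed signal within +/- radius of its original position --
    a 1-D stand-in for region-decoupled registration."""
    shifts = {}
    n = len(fixed)
    for lab in sorted(set(labels[labels > 0].tolist())):
        idx = np.flatnonzero(labels == lab)
        patch = moving[idx[0]:idx[-1] + 1]
        lo = max(0, idx[0] - radius)
        hi = min(n - len(patch), idx[0] + radius)
        # score every candidate alignment by correlation with the fixed signal
        scores = [float(np.dot(patch, fixed[s:s + len(patch)]))
                  for s in range(lo, hi + 1)]
        shifts[lab] = lo + int(np.argmax(scores)) - idx[0]
    return shifts

# Two "organs" that moved by different amounts between scans.
fixed = np.zeros(100); fixed[20:26] = 1.0; fixed[70:76] = 2.0
moving = np.zeros(100); moving[23:29] = 1.0; moving[66:72] = 2.0
labels = np.zeros(100, dtype=int); labels[23:29] = 1; labels[66:72] = 2
print(per_region_shifts(fixed, moving, labels))  # {1: -3, 2: 4}
```

No single global translation can align both regions at once here, which is the point of decoupling the transforms.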
Citations: 1
Data Level Fusion of Multilook Inverse Synthetic Aperture Radar (ISAR) Images
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.21
Zhixi Li, R. Narayanan
Although techniques for resolution enhancement in single-aspect radar imaging have made rapid progress in recent years, it does not necessarily imply that such enhanced images will improve target identification or recognition. However, when multiple looks of the same target from different aspects are obtained, the available knowledge base increases, allowing more useful target information to be extracted. Physics based image fusion techniques can be developed by processing the raw data collected from multiple ISAR sensors, even if these individual images are at different resolutions. We derive an appropriate data fusion rule in order to generate a composite image containing increased target shape characteristics for improved target recognition. The rule maps multiple data sets collected by multiple radars with different system parameters onto the same spatial-frequency space. The composite image can be reconstructed using the inverse 2-D Fourier transform over the separated multiple integration areas. An algorithm called the matrix Fourier transform is created to realize such a complicated integral. This algorithm can be regarded as an exact interpolation, such that there is no information loss caused by data fusion. The rotation centers need to be carefully selected in order to properly register the multiple images before performing the fusion. A comparison of the IAR (Image Attribute Rating) curve between the fused image and the spatial-averaged images quantifies the improvement in the detected target features. The technique shows considerable improvement over a simple spatial averaging algorithm and thereby enhances target recognition.
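The fusion rule's key property, no information loss when every sensor's samples land on a common spatial-frequency grid, can be checked with a toy numpy experiment. This is an idealization: real multilook data lie off-grid, which is why the paper needs its matrix Fourier transform / exact interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))
F = np.fft.fft2(img)

# Each "sensor" observes a different region of the shared frequency grid.
m1 = np.zeros_like(F, dtype=bool); m1[:16, :] = True
m2 = ~m1

# Data-level fusion: accumulate each sensor's samples on the common grid,
# then a single inverse 2-D FFT reconstructs the composite image.
grid = np.zeros_like(F)
grid[m1] = F[m1]
grid[m2] = F[m2]
composite = np.fft.ifft2(grid).real

print(np.allclose(composite, img))  # True: exact recovery from disjoint looks
```

Because the two masks partition the grid, the fused spectrum equals the full spectrum and the reconstruction is exact, mirroring the "exact interpolation" claim.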
Citations: 15
An Adaptive and Non Linear Technique for Enhancement of Extremely High Contrast Images
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.11
Saibabu Arigela, V. Asari
In night-time surveillance, there is a possibility of having extremely bright and dark regions in some image frames of a video sequence. A novel nonlinear image enhancement algorithm for digital images captured under such extremely non-uniform lighting conditions is proposed in this paper. The new technique comprises three processes: adaptive intensity enhancement, contrast enhancement, and color restoration. Adaptive intensity enhancement uses a specifically designed nonlinear transfer function which is capable of reducing the intensity of bright regions and at the same time enhancing the intensity of dark regions. Contrast enhancement tunes the magnitude of each pixel's intensity based on its surrounding pixels. Finally, a linear color restoration process based on the chromatic information of the input image frame is applied to convert the enhanced intensity image back to a color image.
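A minimal sketch of the dark-up / bright-down behaviour described above, using a hypothetical local-mean-driven gamma; the paper's actual transfer function is different, this only illustrates the intended effect:

```python
import numpy as np

def adaptive_enhance(intensity, win=15):
    """Toy adaptive transfer: raise each (0..1) pixel to an exponent
    driven by its local mean, so dark neighbourhoods are lifted
    (gamma < 1) and bright ones compressed (gamma > 1)."""
    pad = win // 2
    padded = np.pad(intensity, pad, mode="edge")
    # box-filtered local mean via a separable 1-D convolution
    kernel = np.ones(win) / win
    local = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    local = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, local)
    gamma = 2.0 ** (2.0 * (local - 0.5))   # local mean 0.5 -> gamma 1 (no change)
    return np.clip(intensity, 1e-6, 1.0) ** gamma

dark = adaptive_enhance(np.full((20, 20), 0.1))    # dark patch is brightened
bright = adaptive_enhance(np.full((20, 20), 0.9))  # bright patch is dimmed
print(round(float(dark[10, 10]), 3), round(float(bright[10, 10]), 3))  # 0.266 0.832
```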
Citations: 18
A Visualization Tool to convey Quantitative in vivo, 3D Knee Joint Kinematics
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.8
A. Seisler, F. Sheehan
The overall goal of the virtual functional anatomy (VFA) project is to fill the important knowledge gap that exists in the relationship between functional movement limitations and impaired joint structure or function. Thus, a set of imaging-based post-processing tools is under development to enable dynamic and static magnetic resonance image (MRI) data to be merged. These tools will provide accurate quantification and visualization of 3D static and dynamic properties of musculoskeletal anatomy (i.e. skeletal kinematics, tendon and ligament strain, muscle force, cartilage contact). The current focus is to apply the six-degree-of-freedom joint kinematics to subject-specific models and to quantify dynamic musculoskeletal properties, such as tendon and muscle moment arms, joint cartilage contact, and tendon strain. To date, these tools have been used to study joint function of healthy and impaired (e.g. Cerebral Palsy, ACL rupture and patellar tracking syndrome) joint structures under simulated conditions experienced during activities of daily living.
Citations: 0
Viewpoint-Invariant and Illumination-Invariant Classification of Natural Surfaces Using General-Purpose Color and Texture Features with the ALISA dCRC Classifier
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.40
Teddy Ko, P. Bock
The paper reports the development of a classifier that can accurately and reliably discriminate among a large number of different natural surfaces in canonical and natural color images regardless of the viewpoint and illumination conditions. To achieve this objective, a set of general-purpose color and texture features were identified as the input to an ALISA statistical learning engine. These general-purpose color and texture features are those which exhibit the least sensitivity to illumination and viewpoint variation in a broad range of applications. To overcome the Bayesian confusion when a large number of test classes is involved, an ALISA deltaCRC classification method is developed. The classifier selects the trained class whose known reclassification distribution histogram, obtained from training image patches, most closely matches the unknown classification distribution of the test image patch. Preliminary results using the CUReT color texture dataset with test images not in the training set yield average classification accuracies well above 95% with no significant associated cost in computation time.
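The matching step, picking the trained class whose stored reclassification histogram lies closest to the test patch's distribution, can be sketched as a nearest-histogram rule. The L1 distance and the class names here are illustrative stand-ins, not the paper's exact criterion:

```python
import numpy as np

def nearest_histogram_class(test_hist, class_hists):
    """Return the class whose stored (re)classification histogram is
    closest to the test patch's histogram under an L1 distance."""
    best, best_d = None, np.inf
    for name, h in class_hists.items():
        d = float(np.abs(np.asarray(h) - np.asarray(test_hist)).sum())
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical per-class reclassification histograms over 3 bins.
hists = {"grass": [0.7, 0.2, 0.1], "bark": [0.1, 0.3, 0.6]}
print(nearest_histogram_class([0.6, 0.3, 0.1], hists))  # grass
```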
Citations: 1
3D shape estimation and texture generation using texture foreshortening cues
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.6
J. Colombe
The surfaces of 3D objects may be represented as a connected distribution of surface patches that point in various directions with respect to the observer. Viewpoint-normal patches are those whose tangent plane is perpendicular to the line of sight. Foreshortening of surface patches results from their obliquity, with a directional wavelength compression and an accompanying 1-dimensional stretching of the spatial frequency distribution. This stretching of spatial frequency distributions was used to generate plausible depth illusions via local foreshortening of surface textures rendered from a stretched spatial frequency envelope. Texture foreshortening cues were exploited by a multi-stage image analysis method that revealed local dominant orientation, degree of orientation dominance, relative power in spatial frequencies at a given orientation, and a measure of local surface obliquity; these cues provide incomplete but useful information in a multi-cue depth estimation framework.
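The "local dominant orientation" cue mentioned above is commonly estimated from the structure tensor; the sketch below is one standard estimator, not necessarily the paper's multi-stage method:

```python
import numpy as np

def dominant_orientation(patch):
    """Dominant gradient orientation of a texture patch from its
    structure tensor (summed gradient outer products).
    Returns the angle of the dominant gradient direction in radians."""
    gy, gx = np.gradient(patch.astype(float))   # gradients along y, x
    jxx, jyy, jxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    # eigen-direction of the 2x2 tensor, in (-pi/2, pi/2]
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

stripes = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32)), (32, 1))
print(round(float(dominant_orientation(stripes)), 3))    # 0.0   (varies along x)
print(round(float(dominant_orientation(stripes.T)), 3))  # 1.571 (varies along y)
```

Under foreshortening, the spatial-frequency stretch happens along this dominant direction, which is why recovering it is the first stage of the analysis.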
Citations: 0
3D Image Reconstruction and Range-Doppler Tracking with Chirped AM Ladar Data
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.5
J. Dammann, B. Redman, W. Ruff
The Army Research Laboratory (ARL) has been developing its patented chirped amplitude modulation (AM) ladar technique for high resolution 3D imaging and range-Doppler tracking. The concept of operation, hardware configurations, and test results for this technique have been presented in detail elsewhere. Heretofore, the signal and image processing techniques used at ARL to reconstruct and display 3D imagery and range-Doppler plots have only been published partially and only in internal reports. In this paper we present the multiple-return range and range-Doppler signal processing algorithms, the model-based "superresolution" processing algorithm for range precision enhancement, and the 3D image reconstruction, processing, and display algorithms, along with representative examples from laboratory and field test data.
Citations: 1
Modeling of Target Shadows for SAR Image Classification
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.27
S. Papson, R. Narayanan
A recent thrust of non-cooperative target recognition (NCTR) using synthetic aperture radar (SAR) has been to complement the extraction of scattering centers by incorporating information contained in the target shadow. When classifying targets based on the shadow region alone, it is essential that an image be well clustered into its respective shadow, highlight, and background regions. To obtain the segmentation, the intensity and spatial location of a pixel are modeled as a mixture of Gaussian distributions. Expectation-maximization (EM) is used to obtain the corresponding distributions for the three regions within a given image. Anisotropic smoothing is applied to smooth the input image as well as the posterior probabilities. A representation of the shadow boundary is developed in conjunction with a Hidden Markov Model (HMM) ensemble to obtain target classification. A variety of targets from the MSTAR database are used to test the performance of both the segmentation algorithm and classification structure.
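The intensity part of the three-region clustering can be sketched as a plain 1-D Gaussian-mixture EM. The paper additionally models spatial location and applies anisotropic smoothing to the posteriors; both are omitted here, and the quantile initialization is an assumption to keep the run deterministic:

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=60):
    """Minimal EM for a 1-D Gaussian mixture over pixel intensities --
    the clustering idea behind a shadow / background / highlight split.
    Returns the k component means, sorted ascending."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))   # deterministic init
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        d = x[:, None] - mu[None, :]
        r = w * np.exp(-0.5 * d * d / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d * d).sum(axis=0) / nk + 1e-9
    return np.sort(mu)

# Synthetic intensities: shadow ~0.1, background ~0.5, highlight ~0.9.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 0.03, 300) for m in (0.1, 0.5, 0.9)])
print(np.round(em_gmm_1d(x), 2))  # close to [0.1, 0.5, 0.9]
```

Each pixel would then be assigned to the component with the highest responsibility, yielding the three-region segmentation.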
Citations: 14
Segmentation and Classification of Human Forms using LADAR Data
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.35
J. Albus, T. Hong, Tommy Chang
High-resolution LADAR (laser detection and ranging) images of scenes containing human forms have been automatically segmented, and simple algorithms have been developed for recognizing human forms in various positions in both cluttered and uncluttered scenes. Registration of LADAR and color CCD images is suggested as a method to enhance the ability to segment both types of images.
Citations: 4
Nonlinear 3D and 2D Transforms for Image Processing and Surveillance
Pub Date : 2006-10-11 DOI: 10.1109/AIPR.2006.28
Y. Tirat-Gefen
Linear transforms such as bidimensional and tridimensional spatial Fourier transforms for image applications have their limitations due to the uncertainty principle. Also, Fourier transforms allow the existence of negative luminance, which is not physically possible. Wavelet transforms alleviate that through the use of a non-negative wavelet function base, but it still leads to wide spectrum representations. This paper discusses the deployment of new nonlinear methods such as the Hilbert-Huang transform for low-cost embedded applications using microprocessors and field programmable gate arrays. Basically, we extract a set of intrinsic mode functions (IMFs), which represent the spectrum of the 3D or 2D scene of a space, using these functions as a Hilbert base. Immediate applications for our low cost high performance hardware oriented architecture include image processing for biomedical applications (e.g. pattern recognition, image compression, telemedicine) and surveillance.
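The IMF extraction at the heart of the Hilbert-Huang transform rests on sifting. A heavily simplified single pass (linear envelopes instead of the usual cubic splines, no IMF stopping criterion) shows the idea of separating a fast oscillation from a slow trend:

```python
import numpy as np

def sift_once(x):
    """One sifting pass of empirical mode decomposition: subtract the
    mean of the upper and lower envelopes. Envelopes are linearly
    interpolated through local extrema here for brevity."""
    t = np.arange(len(x))
    up = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1  # maxima
    lo = np.flatnonzero((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:])) + 1  # minima
    upper = np.interp(t, up, x[up])
    lower = np.interp(t, lo, x[lo])
    return x - 0.5 * (upper + lower)

t = np.linspace(0.0, 1.0, 512)
fast = np.sin(2 * np.pi * 8 * t)   # oscillation the first IMF should capture
signal = fast + 2.0 * t            # slow trend rides underneath

imf = signal
for _ in range(4):                 # a few sifting passes
    imf = sift_once(imf)

core = slice(32, -32)              # ignore envelope edge effects
print(np.corrcoef(imf[core], fast[core])[0, 1] > 0.95)  # True
```

The residue (signal minus the IMF) carries the slow trend; repeating the procedure on the residue yields the next IMF, and the Hilbert transform of each IMF then gives the instantaneous-frequency spectrum.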
Citations: 1