
Latest publications from the 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Learning tree-structured approximations for conditional random fields
Pub Date : 2014-11-18 DOI: 10.1109/AIPR.2014.7041937
A. Skurikhin
Exact probabilistic inference is computationally intractable in general probabilistic graph-based models, such as Markov Random Fields and Conditional Random Fields (CRFs). We investigate spanning tree approximations for the discriminative CRF model. We decompose the original, computationally intractable grid-structured CRF model containing many cycles into a set of tractable sub-models using a set of spanning trees. The structure of the spanning trees is generated uniformly at random among all spanning trees of the original graph. These trees are learned independently to address the classification problem, and Maximum Posterior Marginal estimation is performed on each individual tree. Classification labels are produced via a voting strategy over the marginals obtained on the sampled spanning trees. The learning is computationally efficient because inference on trees is exact and efficient. Our objective is to investigate how well a pool of randomly sampled acyclic graphs, learned in this way, approximates the original loopy graph model with loopy belief propagation inference. We focus on the impact of memorizing the structure of the sampled trees. We compare two approaches to creating an ensemble of spanning trees whose parameters are optimized during learning: (1) memorizing the structure of the sampled spanning trees used during learning, and (2) discarding the structure of the sampled spanning trees after learning and regenerating trees anew. Experiments are done on two image datasets consisting of synthetic and real-world images. These datasets were designed for the tasks of binary image denoising and man-made structure recognition.
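The tree-sampling and voting steps described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code: it samples spanning trees of a 4-connected grid by running Kruskal's algorithm over randomly ordered edges (a simple stand-in; sampling exactly uniformly over all spanning trees, as the abstract specifies, would use e.g. Wilson's algorithm), and fuses per-tree labelings by per-pixel majority vote in place of voting over true marginals.

```python
import random
from collections import Counter

def grid_edges(h, w):
    """Edges of a 4-connected grid graph over pixel nodes (r, c)."""
    edges = []
    for r in range(h):
        for c in range(w):
            if r + 1 < h:
                edges.append(((r, c), (r + 1, c)))
            if c + 1 < w:
                edges.append(((r, c), (r, c + 1)))
    return edges

def random_spanning_tree(h, w, rng):
    """Sample a spanning tree via Kruskal's algorithm over randomly ordered
    edges (random order plays the role of random weights). NOTE: not exactly
    uniform over all spanning trees; Wilson's algorithm would be."""
    parent = {(r, c): (r, c) for r in range(h) for c in range(w)}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    edges = grid_edges(h, w)
    rng.shuffle(edges)
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def majority_vote(labelings):
    """Fuse per-tree labelings: per-pixel majority over the ensemble."""
    return {px: Counter(l[px] for l in labelings).most_common(1)[0][0]
            for px in labelings[0]}
```

Each sampled tree would be trained and decoded independently before its labeling enters `majority_vote`, which is what makes the scheme parallel and tractable.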
Citations: 3
Development of spectropolarimetric imagers for imaging of desert soils
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041908
N. Gupta
There is much interest in imaging desert soils to understand their mineral composition, grain sizes, and grain orientations for various civilian and military applications. We discuss the development of two novel field-portable spectropolarimetric imagers based on acousto-optic tunable filter (AOTF) technology in the visible near-infrared (VNIR) and shortwave infrared (SWIR) wavelength regions. The first imager covers a spectral region from 450 to 800 nm with a bandwidth of 5 nm at 633 nm, and the second from 1000 to 1600 nm with a bandwidth of 15 nm at 1350 nm. These imagers will be used in field tests. In this paper, we discuss salient aspects of spectropolarimetric imager development and present some data collected with these imagers.
Citations: 6
Large displacement optical flow based image predictor model
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041943
N. Verma, Aakansha Mishra
This paper proposes a Large Displacement Optical Flow based Image Predictor Model for generating future image frames from past and present image frames. The predictor model is an Artificial Neural Network (ANN) and Radial Basis Function Neural Network (RBFNN) model whose inputs are the horizontal and vertical components of the velocities estimated using Large Displacement Optical Flow for every pixel in a given image sequence. There has been a significant amount of research in the past on generating future image frames from a given set of image frames. The quality of the generated images is evaluated by Canny's edge detection Index Metric (CIM) and the Mean Structure Similarity Index Metric (MSSIM). For our proposed algorithm, the CIM and MSSIM indices of all the generated future images are better than those of the most recent existing algorithms for future image frame generation. The objective of this study is to develop a generalized framework that can predict future image frames for any given image sequence with large displacements of objects. In this paper, we validate the developed Image Predictor Model on an image sequence of a landing jet fighter, and the obtained performance indices are better than those of the most recent existing image predictor models.
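To make the flow-to-frame relationship concrete, here is a minimal sketch of how per-pixel velocities displace intensities between frames. This is only a flow-only forward-warping illustration under stated assumptions; the paper's actual predictor is a learned ANN/RBFNN over the flow components, not this direct warp.

```python
def predict_next_frame(frame, flow_u, flow_v):
    """Forward-warp each pixel along its optical-flow vector to form a
    predicted next frame (nearest-neighbour splat; pixels that receive no
    splat keep their previous value). `frame`, `flow_u`, `flow_v` are
    equally sized 2D lists."""
    h, w = len(frame), len(frame[0])
    pred = [row[:] for row in frame]
    for r in range(h):
        for c in range(w):
            r2 = round(r + flow_v[r][c])  # vertical displacement
            c2 = round(c + flow_u[r][c])  # horizontal displacement
            if 0 <= r2 < h and 0 <= c2 < w:
                pred[r2][c2] = frame[r][c]
    return pred
```

A learned predictor replaces the fixed `(flow_u, flow_v)` with extrapolated velocities so that motion can be projected more than one frame ahead.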
Citations: 6
A 3D pointcloud registration algorithm based on fast coherent point drift
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041917
Min Lu, Jian Zhao, Yulan Guo, Jianping Ou, Jonathan Li
Pointcloud registration has a number of applications in various research areas. Computational complexity and accuracy are two major concerns for a pointcloud registration algorithm. This paper proposes a novel Fast Coherent Point Drift (F-CPD) algorithm for 3D pointcloud registration. The original CPD method is very time-consuming, and the situation becomes even worse when the number of points is large. In order to overcome the limitations of the original CPD algorithm, a globally convergent squared iterative expectation maximization (gSQUAREM) scheme is proposed. The gSQUAREM scheme uses an iterative strategy to estimate the transformations and correspondences between two pointclouds. Experimental results on a synthetic dataset show that the proposed algorithm outperforms both the original CPD algorithm and the Iterative Closest Point (ICP) algorithm in terms of registration accuracy and convergence rate.
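The squared-iterative acceleration at the heart of a SQUAREM-style scheme can be shown on any fixed-point map. The sketch below is a generic SQUAREM step (not the paper's gSQUAREM implementation): it takes two applications of the EM-style update `F`, forms first and second differences, and extrapolates with a steplength `alpha`, followed by a stabilizing `F` application.

```python
def squarem_step(F, x0):
    """One SQUAREM acceleration step for a fixed-point map F (e.g. an EM
    update), here acting on plain lists of floats."""
    x1 = F(x0)
    x2 = F(x1)
    r = [a - b for a, b in zip(x1, x0)]                  # first difference
    v = [a - 2 * b + c for a, b, c in zip(x2, x1, x0)]   # second difference
    nv = sum(e * e for e in v) ** 0.5
    if nv == 0.0:                                        # already converged
        return x2
    alpha = -(sum(e * e for e in r) ** 0.5) / nv         # steplength
    xp = [a - 2 * alpha * b + alpha * alpha * c
          for a, b, c in zip(x0, r, v)]                  # extrapolated point
    return F(xp)                                         # stabilizing F step
```

For a linear contraction the extrapolation lands on the fixed point in one step, which is why such schemes sharply cut the number of expensive E/M sweeps CPD needs.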
Citations: 5
Mathematical model and experimental methodology for calibration of a LWIR polarimetric-hyperspectral imager
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041909
Joel G. Holder, Jacob A. Martin, K. Gross
Polarimetric-hyperspectral imaging brings two traditionally independent modalities together to potentially enhance scene characterization capabilities. This could increase confidence in target detection, material identification, and background characterization over traditional hyperspectral imaging. In order to fully exploit the spectro-polarimetric signal, a careful calibration process is required to remove both the radiometric and polarimetric response of the system (gain). In the long-wave infrared, calibration is further complicated by the polarized self-emission of the instrument itself (offset). This paper presents both the mathematical framework and the experimental methodology for the spectro-polarimetric calibration of a long-wave infrared (LWIR) Telops Hyper-Cam which has been modified with a rotatable wire-grid polarizer at the entrance aperture. The mathematical framework is developed using a Mueller matrix approach to model the polarimetric effects of the system, and this is combined with a standard Fourier-transform spectrometer (FTS) radiometric calibration framework. This is done for two cases: one assuming that the instrument polarizer is ideal, and a second accounting for a non-ideal instrument polarizer. It is shown that a standard two-point radiometric calibration at each instrument polarizer angle is sufficient to remove the polarimetric bias of the instrument if the instrument polarizer can be assumed to be ideal. For the non-ideal polarizer case, the system matrix and the Mueller deviation matrix are experimentally determined for the system and used to quantify how non-ideal the system is. The noise-equivalent spectral radiance and DoLP are also quantified using a wide-area blackbody. Finally, a scene with a variety of features in it is imaged and analyzed.
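The two-point radiometric calibration the abstract relies on (per polarizer angle, under the ideal-polarizer assumption) solves the linear instrument response S = G·L + O from measurements of two blackbodies at known radiances. A minimal per-band sketch, with illustrative names:

```python
def two_point_calibration(s_cold, s_hot, L_cold, L_hot):
    """Solve the linear response S = G * L + O for gain G and offset O
    from cold/hot blackbody signals at known radiances. Repeating this at
    each polarizer angle removes the per-angle polarimetric bias when the
    instrument polarizer is assumed ideal."""
    G = (s_hot - s_cold) / (L_hot - L_cold)  # instrument gain
    O = s_cold - G * L_cold                  # self-emission offset
    return G, O

def to_radiance(signal, G, O):
    """Invert the instrument response to recover scene spectral radiance."""
    return (signal - O) / G
```

In practice this runs independently for every spectral band; the non-ideal-polarizer case replaces the scalar pair (G, O) with the measured system and Mueller deviation matrices.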
Citations: 1
KWIVER: An open source cross-platform video exploitation framework
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041910
Keith Fieldhouse, Matthew J. Leotta, Arslan Basharat, Russell Blue, David Stoup, Chuck Atkins, Linus Sherrill, B. Boeckel, Paul Tunison, Jacob Becker, Matthew Dawkins, Matthew Woehlke, Roderic Collins, M. Turek, A. Hoogs
We introduce KWIVER, a cross-platform video exploitation framework that Kitware has begun releasing as open source. Kitware is utilizing a multi-tiered open-source approach to reach as wide an audience as possible. Kitware's government-funded efforts to develop critical defense technology will be released back to the defense community via Forge.mil, a government open source repository. Infrastructure, algorithms, and systems without release restrictions will be provided to the larger video analytics community via kwiver.org and GitHub. Our goal is to provide a video analytics technology baseline for repeatable and reproducible experiments and to serve as a framework for the development of computer vision and machine learning systems. We hope that KWIVER will provide a focal point for collaboration and contributions from groups across the community.
Citations: 3
Enhanced view invariant gait recognition using feature level fusion
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041942
H. Chaubey, M. Hanmandlu, S. Vasikarla
In this paper, following the model-free approach to gait image representation, an individual recognition system is developed using Gait Energy Image (GEI) templates. The GEI templates can easily be obtained from an image sequence of a walking person. Low-dimensional feature vectors are extracted from the GEI templates using Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA), followed by nearest neighbor classification for recognition. Genuine and impostor scores are computed to draw the Receiver Operating Characteristics (ROC). In practical scenarios, the viewing angles of the gallery data and probe data may not be the same. To tackle such difficulties, a View Transformation Model (VTM) is developed using Singular Value Decomposition (SVD). The gallery data at a different viewing angle are transformed to the viewing angle of the probe data using the View Transformation Model. This paper attempts to enhance the overall recognition rate by efficiently fusing features transformed from other viewing angles to that of the probe data. Experimental results show that fusion of view-transformed features enhances the overall performance of the recognition system.
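The GEI template itself is simple to state: the pixel-wise mean of size-normalized, aligned binary silhouettes over a gait cycle. A minimal sketch on plain 2D lists (illustrative only; the paper's pipeline then applies PCA/MDA to these templates):

```python
def gait_energy_image(silhouettes):
    """Gait Energy Image: pixel-wise mean of aligned, size-normalized
    binary silhouettes (0/1) over a gait cycle; output values lie in
    [0, 1], bright where the body is present in most frames."""
    n = len(silhouettes)
    h, w = len(silhouettes[0]), len(silhouettes[0][0])
    return [[sum(s[r][c] for s in silhouettes) / n for c in range(w)]
            for r in range(h)]
```

Flattening the resulting template row-major gives the high-dimensional vector that PCA and MDA then project down for nearest-neighbor matching.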
Citations: 7
A comparative study of methods to solve the watchman route problem in a photon mapping-illuminated 3D virtual environment
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041913
B. A. Johnson, J. Isaacs, H. Qi
Understanding where to place static sensors such that the amount of information gained is maximized while the number of sensors used to obtain that information is minimized is an instance of solving the NP-hard art gallery problem (AGP). A closely related problem is the watchman route problem (WRP), which seeks to plan an optimal route for an unmanned vehicle (UV) or multiple UVs such that the amount of information gained is maximized while the distance traveled to gain that information is minimized. In order to solve the WRP, we present the Photon-mapping-informed active-Contour Route Designator (PICRD) algorithm. PICRD heuristically solves the WRP by selecting AGP-solving vertices and connecting them, using a shortest-route path-finding algorithm, with vertices provided by a 3D mesh generated by a photon-mapping-informed segmentation algorithm. Since we use photon mapping as our foundation for determining UV-sensor coverage in the PICRD algorithm, we can take into account the behavior of photons as they propagate through the various environmental conditions that might be encountered by a single UV or multiple UVs. Furthermore, since we are agnostic with regard to the segmentation algorithm used to create our WRP-solving mesh, we can adjust the segmentation algorithm to accommodate different environmental and computational circumstances. In this paper, we demonstrate how to adapt our methods to solve the WRP for single and multiple UVs with PICRD, using two different segmentation algorithms under varying virtual environmental conditions.
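The abstract only says the AGP-solving vertices are connected through the mesh by "a shortest-route path-finding algorithm"; Dijkstra's algorithm is one standard choice, shown here purely as an assumed stand-in over a weighted adjacency-list graph.

```python
import heapq

def shortest_distances(adj, src):
    """Dijkstra over a weighted graph given as {node: [(nbr, weight), ...]}.
    Returns shortest-path distances from src to every reachable node."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Running this between consecutive selected vertices, with mesh-edge lengths as weights, yields the leg-by-leg route a PICRD-style planner would stitch together.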
Citations: 1
Secret communication in colored images using saliency map as model
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041919
Manish Mahajan, Navdeep Kaur
Steganography is the process of hiding a message in a suitable carrier, for example an image or an audio file. Many algorithms have been proposed for this purpose in both the spatial and frequency domains. In almost all of them, however, it has been observed that embedding secret data in an image disturbs certain characteristics or statistics of the image. To address this problem, another paradigm, known as adaptive steganography, bases the embedding upon a mathematical model. The human visual system does not process the entire image area; rather, it focuses on a limited region of the visual field, and exactly where visual attention falls is an active research topic. Research on psychological phenomena indicates that attention is drawn to features that differ from their surroundings or that are unusual or unfamiliar to the human visual system. With the aid of a saliency map, object- or region-based image processing can be performed more efficiently using information about the locations that are visually salient to human perception. A saliency map may therefore serve as the model for adaptive steganography in images. Keeping this in view, a novel steganography technique based upon a saliency map is proposed in this work.
{"title":"Secret communication in colored images using saliency map as model","authors":"Manish Mahajan, Navdeep Kaur","doi":"10.1109/AIPR.2014.7041919","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041919","url":null,"abstract":"Steganography is a process that involves hiding a message in an appropriate carrier for example an image or an audio file. Many algorithms have been proposed for this purpose in spatial & frequency domain. But in almost all the algorithms it has been noticed that as one embeds the secret data in the image certain characteristics or statistics of the image get disturbed. To deal with this problem another paradigm named as adaptive steganography exists which is based upon some mathematical model. Visual system of human beings does not process the complete area of image rather focus upon limited area of visual image. But in which area does the visual attention focused is a topic of hot research nowadays. Research on psychological phenomenon indicates that attention is attracted to features that differ from its surroundings or the one that are unusual or unfamiliar to the human visual system. Object or region based image processing can be performed more efficiently with information pertaining locations that are visually salient to human perception with the aid of a saliency map. So saliency map may act as model for adaptive steganography in images. Keeping this in view, a novel steganography technique based upon saliency map has been proposed in this work.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132668295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
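The abstract above stops short of specifying the saliency model or the embedding rule. The sketch below illustrates the general idea under assumptions of my own: a crude gradient-magnitude proxy for saliency, and LSB embedding ordered from the least-salient pixels first, so that changes land where attention is least drawn. The paper's actual saliency map and embedding policy may differ; every name here is hypothetical.

```python
def saliency(img):
    """Crude saliency proxy: forward-difference gradient magnitude per pixel."""
    h, w = len(img), len(img[0])
    s = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][min(x + 1, w - 1)] - img[y][x])
            gy = abs(img[min(y + 1, h - 1)][x] - img[y][x])
            s[y][x] = gx + gy
    return s

def _embedding_order(img):
    """Pixel visiting order, least salient first.

    Saliency is computed on the LSB-masked image so that embedding does not
    perturb the ordering and the receiver can recover the same sequence.
    """
    base = [[p & ~1 for p in row] for row in img]
    s = saliency(base)
    h, w = len(img), len(img[0])
    return sorted((s[y][x], y, x) for y in range(h) for x in range(w))

def embed(img, bits):
    """Write message bits into the LSBs of the least-salient pixels."""
    out = [row[:] for row in img]
    for bit, (_, y, x) in zip(bits, _embedding_order(img)):
        out[y][x] = (out[y][x] & ~1) | bit
    return out

def extract(img, n):
    """Read n bits back in the same saliency-derived order."""
    return [img[y][x] & 1 for _, y, x in _embedding_order(img)[:n]]

# A tiny grayscale cover with one high-contrast (salient) column.
cover = [
    [10, 10, 10, 80],
    [10, 10, 10, 80],
    [10, 10, 10, 80],
    [10, 10, 10, 80],
]
bits = [1, 0, 1, 1]
stego = embed(cover, bits)
```

Computing the ordering from the LSB-masked image is the one design choice that matters: without it, flipping LSBs could reshuffle the saliency ranking and break extraction.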
Modified deconvolution using wavelet image fusion
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041900
Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas
Image quality is affected by two predominant factors: noise and blur. Blur typically manifests as a smoothing of edges and can be described as the convolution of an image with an unknown blur kernel. The inverse of convolution is deconvolution, a difficult process even in the absence of noise, which aims to recover the true image. Removing blur from an image has two stages: identifying or approximating the blur kernel, then deconvolving the estimated kernel from the blurred image. Blur removal is often an iterative process, with successive approximations of the kernel leading to better results. However, a given image is unlikely to be blurred uniformly; in real-world situations most images are already blurred by object motion or camera motion/defocus. Deconvolution, a computationally expensive process, will sharpen blurred regions but can also degrade regions previously unaffected by blur. To remedy these limitations of blur deconvolution, we propose a novel modified deconvolution using wavelet image fusion (moDuWIF) to remove blur from a no-reference image. First, we estimate the blur kernel; then we perform a deconvolution. Finally, wavelet techniques are used to fuse the blurred and deblurred images: details in the blurred image that deconvolution loses are recovered, while the sharpened features in the deblurred image are retained. The proposed technique is evaluated using several metrics and compared to standard approaches. Our results show that this approach has potential applications in many fields, including medical imaging, topography, and computer vision.
{"title":"Modified deconvolution using wavelet image fusion","authors":"Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas","doi":"10.1109/AIPR.2014.7041900","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041900","url":null,"abstract":"Image quality is affected by two predominant factors, noise and blur. Blur typically manifests itself as a smoothing of edges, and can be described as the convolution of an image with an unknown blur kernel. The inverse of convolution is deconvolution, a difficult process even in the absence of noise, which aims to recover the true image. Removing blur from an image has two stages: identifying or approximating the blur kernel, then performing a deconvolution of the estimated kernel and blurred image. Blur removal is often an iterative process, with successive approximations of the kernel leading to optimal results. However, it is unlikely that a given image is blurred uniformly. In real world situations most images are already blurred due to object motion or camera motion/de focus. Deconvolution, a computationally expensive process, will sharpen blurred regions, but can also degrade the regions previously unaffected by blur. To remedy the limitations of blur deconvolution, we propose a novel, modified deconvolution, using wavelet image fusion (moDuWIF), to remove blur from a no-reference image. First, we estimate the blur kernel, and then we perform a deconvolution. Finally, wavelet techniques are implemented to fuse the blurred and deblurred images. The details in the blurred image that are lost by deconvolution are recovered, and the sharpened features in the deblurred image are retained. The proposed technique is evaluated using several metrics and compared to standard approaches. Our results show that this approach has potential applications to many fields, including: medical imaging, topography, and computer vision.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133041071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
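The abstract above names the three moDuWIF stages (kernel estimation, deconvolution, wavelet fusion) but not the fusion rule. The sketch below illustrates only the fusion stage, using a one-level 2D Haar transform and a common heuristic that I am assuming for illustration — average the approximation bands and keep the larger-magnitude detail coefficients — which is not necessarily the paper's actual rule.

```python
import numpy as np

def haar2d(a):
    """One-level 2D Haar transform of an even-sized array -> (LL, LH, HL, HH)."""
    lo = (a[:, 0::2] + a[:, 1::2]) / 2  # column low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2  # column high-pass
    ll = (lo[0::2] + lo[1::2]) / 2
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse_wavelet(blurred, deblurred):
    """Fuse two registered images: mean approximation, max-magnitude details."""
    cb, cd = haar2d(blurred), haar2d(deblurred)
    ll = (cb[0] + cd[0]) / 2
    details = [np.where(np.abs(b) >= np.abs(d), b, d) for b, d in zip(cb[1:], cd[1:])]
    return ihaar2d(ll, *details)

img = np.arange(16.0).reshape(4, 4)
fused = fuse_wavelet(img, img)  # fusing an image with itself returns it unchanged
```

The max-magnitude rule is what lets the fused result keep the sharpened edges from the deblurred input while falling back to the blurred input wherever deconvolution weakened genuine detail.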
Journal
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)