
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): latest publications

Machine learning nuclear detonation features
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041936
Daniel T. Schmitt, Gilbert L. Peterson
Nuclear explosion yield estimation equations based on a 3D model of the explosion volume will have a lower uncertainty than radius-based estimation. Accurately collecting data for a volume model of atmospheric explosions requires building a 3D representation from 2D images. The majority of 3D reconstruction algorithms use the SIFT (scale-invariant feature transform) feature detection algorithm, which works best on feature-rich objects with continuous angular collections. These assumptions are not met by the archive of nuclear explosions, which has only 3 points of view. This paper reduces 300 dimensions derived from an image based on Fourier analysis and five edge detection algorithms to a manageable number to detect hotspots that may be used to correlate videos of different viewpoints for 3D reconstruction. Furthermore, experiments test whether histogram equalization improves detection of these features using four kernel sizes passed over these features. Dimension reduction using principal components analysis (PCA), forward subset selection, ReliefF, and FCBF (Fast Correlation-Based Filter) is combined with a Mahalanobis distance classifier to find the best combination of dimensions, kernel size, and filtering to detect the hotspots. Results indicate that hotspots can be detected with hit rates of 90% and false alarm rates of 1%.
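As a rough illustration of the pipeline sketched above (PCA dimension reduction followed by a Mahalanobis-distance classifier), here is a minimal self-contained example on synthetic two-class data; the 300-D features, class sizes, and separation are invented stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 300-D per-region features (the real features
# come from Fourier analysis and five edge detectors).
hot = rng.normal(2.0, 1.0, size=(200, 300))   # "hotspot" class
bg = rng.normal(0.0, 1.0, size=(200, 300))    # background class
X = np.vstack([hot, bg])
y = np.array([1] * 200 + [0] * 200)

# PCA via SVD: project onto the top k principal components.
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T

# Mahalanobis-distance classifier: pooled covariance, per-class means.
cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
means = {c: Z[y == c].mean(axis=0) for c in (0, 1)}

def mahalanobis(z, mu):
    d = z - mu
    return float(d @ cov_inv @ d)

pred = np.array([min((0, 1), key=lambda c: mahalanobis(z, means[c])) for z in Z])
hit_rate = (pred[y == 1] == 1).mean()
false_alarm_rate = (pred[y == 0] == 1).mean()
print(hit_rate, false_alarm_rate)
```

On well-separated synthetic classes the hit rate approaches 1 and the false-alarm rate approaches 0; the paper's 90%/1% figures are for real archival footage.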
Cited by: 2
Images don't forget: Online photogrammetry to find lost graves
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041939
Abby Stylianou, Joseph D. O'Sullivan, Austin Abrams, Robert Pless
The vast amount of public photographic data posted and shared on Facebook, Instagram and other forms of social media offers an unprecedented visual archive of the world. This archive captures events ranging from birthdays, trips, and graduations to lethal conflicts and human rights violations. Because this data is public, it has given rise to a new genre of journalism, one led by citizens who find, analyze, and synthesize data into stories that describe important events. To support this, we have built a set of browser-based tools for the calibration and validation of online images. This paper presents these tools in the context of their use in finding two separate lost burial locations. Often, these locations would have been marked with a headstone or tomb, but for the very poor, the forgotten, or the victims of extremist violence buried in unmarked graves, the geometric cues present in a photograph may contain the most reliable information about the burial location. The tools described in this paper allow individuals without any significant geometry background to utilize those cues to locate these lost graves, or any other outdoor image with sufficient correspondences to the physical world. We highlight the difficulties that arise due to geometric inconsistencies between corresponding points, especially when significant changes have occurred in the physical world since the photo was taken, and visualization features on our browser-based tools that help users address this.
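The geometric core of locating a ground point from a single photograph can be sketched with a ground-plane homography fitted by the direct linear transform (DLT). The pixel and ground coordinates below are invented for illustration; the paper's browser-based tools wrap this kind of computation with calibration and validation aids:

```python
import numpy as np

# Hypothetical pixel <-> ground-plane correspondences (metres); in practice
# these come from matching landmarks visible both in the photo and on a map.
img = np.array([[100, 200], [400, 210], [390, 500], [110, 480]], float)
gnd = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], float)

# DLT: each correspondence contributes two rows of A with A @ h = 0.
rows = []
for (x, y), (X, Y) in zip(img, gnd):
    rows.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
    rows.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
_, _, Vt = np.linalg.svd(np.array(rows))
H = Vt[-1].reshape(3, 3)          # null vector of A is the homography

def to_ground(px, py):
    """Map an image pixel to ground-plane coordinates."""
    X, Y, W = H @ np.array([px, py, 1.0])
    return X / W, Y / W

# A pixel inside the marked quadrilateral lands inside the 10 m square.
print(to_ground(250, 350))
```

With exactly four correspondences the homography is determined exactly; with more, the same SVD gives the algebraic least-squares fit, which is where the geometric inconsistencies mentioned above show up as residual error.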
Cited by: 2
Human re-identification in multi-camera systems
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041916
Kevin Krucki, V. Asari, Christoph Borel-Donohue, David J. Bunker
We propose a human re-identification algorithm for multi-camera surveillance environments where a unique signature of an individual is learned and tracked in a scene. The video feed from each camera is processed using a motion detector to get locations of all individuals. To compute the human signature, we propose a combination of different descriptors on the detected body, such as the Local Binary Pattern Histogram (LBPH) for the local texture and an HSV color-space based descriptor for the color representation. For each camera, a signature computed by these descriptors is assigned to the corresponding individual along with their direction in the scene. Knowledge of the person's direction allows us to make separate identifiers for the front, back, and sides. These signatures are then used to identify individuals as they walk across different areas monitored by different cameras. The challenges involved are the variation of illumination conditions and scale across the cameras. We test our algorithm on a dataset captured with 3 Axis cameras arranged in the UD Vision Lab as well as a subset of the SAIVT dataset and provide results which illustrate the consistency of the labels as well as precision/accuracy scores.
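A minimal sketch of such an appearance signature, combining a local binary pattern histogram with a hue histogram (a simplified stand-in for the paper's LBPH + HSV descriptors), might look like this; the images are random arrays, not real detections:

```python
import numpy as np

def lbp_hist(gray):
    """8-neighbour local binary pattern histogram (256 bins, L1-normalised)."""
    g = gray.astype(float)
    c = g[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    h = np.bincount(codes.ravel(), minlength=256).astype(float)
    return h / h.sum()

def hsv_hist(hsv, bins=16):
    """Hue histogram as a simple stand-in colour descriptor."""
    idx = (hsv[..., 0].ravel() * bins).astype(int).clip(0, bins - 1)
    h = np.bincount(idx, minlength=bins).astype(float)
    return h / h.sum()

def signature(gray, hsv):
    return np.concatenate([lbp_hist(gray), hsv_hist(hsv)])

rng = np.random.default_rng(1)
person_a = rng.random((64, 32))
person_b = rng.random((64, 32))
hsv_a = rng.random((64, 32, 3))
hsv_b = rng.random((64, 32, 3))
sig_a1 = signature(person_a, hsv_a)
sig_a2 = signature(person_a + rng.normal(0, 0.001, person_a.shape), hsv_a)
sig_b = signature(person_b, hsv_b)

# A noisy re-observation of person A should be closer to A than B is.
d = lambda u, v: np.abs(u - v).sum()
print(d(sig_a1, sig_a2), d(sig_a1, sig_b))
```

The paper additionally conditions signatures on viewing direction (front, back, sides), which this sketch omits.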
Cited by: 4
Fast orthorectified mosaics of thousands of aerial photographs from small UAVs
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041928
M. D. Pritt
Small unmanned air vehicles (UAVs) provide an economical means of imaging large areas of terrain at far lower cost than satellites. Applications range from precision agriculture to disaster response and power line maintenance. Because small UAVs fly at low altitudes of approximately 100 meters, their cameras have only a limited field of view and must take thousands of photographs to cover a reasonably sized area. To provide a unified view of the area, these photographs must be combined into a seamless photo mosaic. The conventional approach for accomplishing this mosaicking process is called block bundle adjustment, and it works well if there are only a few tens or hundreds of photographs. It runs in O(n³) time, where n is the number of images. When there are thousands of photographs, this method fails because its memory and computational time requirements become prohibitively excessive. We have developed a new technique that replaces bundle adjustment with an iterative algorithm that is very fast and requires little memory. After pairwise image registration, the algorithm projects the resulting tie points to the ground and moves them closer to each other to produce a new set of control points. It fits the image parameters to these control points and repeats the process iteratively to convergence. The algorithm is implemented as an image mosaicking application in Java and runs on a Windows PC. It executes in O(n) time and produces very high resolution mosaics (2 cm per pixel) at the rate of 14 sec per image. This time includes all steps of the mosaicking process from the disk read of the imagery to the disk output of the final mosaic. Experiments show the algorithm to be accurate and reliable for mosaicking thousands of images.
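The flavor of the iterative alternative to bundle adjustment can be shown in a deliberately simplified 1-D, translation-only analogue: tie points are projected to the ground with the current image parameters, each pair is pulled to its midpoint to form a control point, and the per-image offsets are refit, repeating to convergence. All numbers here are invented, and the real algorithm fits full image parameters over 2-D imagery:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D stand-in: image i observes ground point p at coordinate p - t_true[i].
# Pairwise registration yields tie-point pairs (img_a, obs_a, img_b, obs_b).
true_t = np.array([0.0, 3.2, 6.9])            # unknown camera offsets
ties = []
for a, b in [(0, 1), (1, 2), (0, 2)]:
    for _ in range(20):
        p = rng.uniform(0, 10)                # shared ground point
        ties.append((a, p - true_t[a] + rng.normal(0, 0.05),
                     b, p - true_t[b] + rng.normal(0, 0.05)))

# Iterative scheme: project tie points to ground with the current offsets,
# replace each pair by its mean (a control point), refit each offset.
# Each pass is linear in the number of tie points.
t = np.zeros(3)
for _ in range(50):
    sums = np.zeros(3)
    counts = np.zeros(3)
    for a, xa, b, xb in ties:
        ctrl = 0.5 * ((xa + t[a]) + (xb + t[b]))   # pulled-together ground point
        sums[a] += ctrl - xa; counts[a] += 1
        sums[b] += ctrl - xb; counts[b] += 1
    t = sums / counts
    t[0] = 0.0                                # fix the gauge: anchor image 0

print(t)   # offsets relative to image 0, near [0, 3.2, 6.9]
```

Because each pass only averages tie-point projections and refits offsets, cost grows linearly with the number of images and tie points, in contrast to the cubic cost of block bundle adjustment.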
Cited by: 14
Volumetric features for object region classification in 3D LiDAR point clouds
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041941
Nina M. Varney, V. Asari
LiDAR data is a set of geo-spatially located points which contain (X, Y, Z) location and intensity data. This paper presents the extraction of a novel set of volume and texture-based features from segmented point clouds. First, the data is segmented into individual object regions using an automatic seeded region growing technique. Then, these object regions are normalized to an N × N × N voxel space, where each voxel contains information about the location and density of points within that voxel. A set of volumetric features are extracted to represent the object region; these features include: 3D form factor, rotation invariant local binary pattern (RILBP), fill, stretch, corrugation, contour, plainness and relative variance. The form factor, fill, and stretch provide a series of meaningful relationships between the volume, surface area, and shape of the object. RILBP provides a textural description from the height variation of the LiDAR data. The corrugation, contour, and plainness are extracted by 3D eigenanalysis of the object volume to describe the details of the object's surface. Relative variance provides an illustration of the distribution of points throughout the object. The new feature set is robust, and scale and rotation invariant for object region classification. The performance of the proposed feature extraction technique has been evaluated on a set of segmented and voxelized point cloud objects in a subset of the aerial LiDAR data from Surrey, British Columbia, which was available through the Open Data Program. The volumetric features, when used as an input to an SVM classifier, correctly classified the object regions with an accuracy of 97.5%, with a focus on identifying five classes: ground, vegetation, buildings, vehicles, and barriers.
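Voxelizing a segmented object region into an N × N × N density grid, the first step the abstract describes, can be sketched as follows; the grid size, point counts, and the simplified "fill" cue are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def voxelize(points, n=8):
    """Normalise a point cloud into an n x n x n per-voxel density grid."""
    mins = points.min(axis=0)
    extent = np.ptp(points, axis=0).max()      # cube side: preserves aspect
    idx = ((points - mins) / extent * (n - 1e-9)).astype(int).clip(0, n - 1)
    grid = np.zeros((n, n, n))
    np.add.at(grid, tuple(idx.T), 1)           # count points per voxel
    return grid / len(points)                  # fraction of points per voxel

rng = np.random.default_rng(3)
# Hypothetical object region: a thin roof-like slab of 5000 points.
slab = np.c_[rng.uniform(0, 10, 5000),
             rng.uniform(0, 10, 5000),
             rng.uniform(4.5, 5.0, 5000)]
grid = voxelize(slab)

# A simple cue in the spirit of the paper's "fill" feature: the fraction of
# occupied voxels (a thin slab fills far fewer voxels than a solid cube).
fill = (grid > 0).mean()
print(fill)
```

Normalizing every region to the same cubic grid is what gives downstream features like fill and stretch their scale invariance.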
Cited by: 2
Sparse generalized Fourier series via collocation-based optimization
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041926
Ashley Prater
Generalized Fourier series with orthogonal polynomial bases have useful applications in several fields, including pattern recognition and image and signal processing. However, computing the generalized Fourier series can be a challenging problem, even for relatively well-behaved functions. In this paper, a method for approximating a sparse collection of Fourier-like coefficients is presented that uses a collocation technique combined with an optimization problem inspired by recent results in compressed sensing research. The discussion includes approximation error rates and numerical examples to illustrate the effectiveness of the method. One example displays the accuracy of the generalized Fourier series approximation for several test functions, while the other is an application of the generalized Fourier series approximation to rotation-invariant pattern recognition in images.
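One standard way to obtain such a sparse coefficient vector is to pair a collocation (Vandermonde-type) matrix in an orthogonal polynomial basis with an ℓ1-regularized least-squares solve. The sketch below uses a Legendre basis and plain ISTA (iterative soft thresholding), which may differ from the paper's optimization method; the target spectrum is an invented example:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(4)

# Assumed example target with a sparse Legendre spectrum: 0.8*P2 + 0.5*P7.
true_c = np.zeros(10)
true_c[2], true_c[7] = 0.8, 0.5
xs = rng.uniform(-1, 1, 100)           # random collocation points
A = legendre.legvander(xs, 9)          # 100 x 10 collocation matrix
b = A @ true_c                         # function values at those points

# ISTA for  min_c 0.5*||A c - b||^2 + lam*||c||_1 :
# gradient step followed by soft thresholding.
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
c = np.zeros(10)
for _ in range(10000):
    c = c - A.T @ (A @ c - b) / L      # gradient step on the data term
    c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # shrinkage

print(np.round(c, 3))                  # only entries 2 and 7 are sizeable
```

The ℓ1 penalty drives the small coefficients to exactly zero, recovering the sparse spectrum from far fewer collocation samples than a dense quadrature would need.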
Cited by: 2
Entropy metric regularization for computational imaging with sensor arrays
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041929
Prudhvi K. Gurram, R. Rao
Correlative interferometric image reconstruction is a computational imaging approach for synthesizing images from sensor arrays and relies on estimating source intensity from the cross-correlation across near-field or far-field measurements from multiple sensors of the arrays. Key to using the approach is the exploitation of the relationship between the correlation and the source intensity. This relationship is of a Fourier transform type when the sensors are in the far-field of the source and the velocity of wave propagation in the intervening medium is constant. Often the estimation problem is ill-posed, resulting in unrealistic reconstructions of images. Positivity constraints, boundary restrictions, ℓ1 regularization, and sparsity constrained optimization have been applied in previous work. This paper considers the noisy case and formulates the estimation problem as least squares minimization with entropy metrics, either minimum or maximum, as regularization terms. Situations involving far-field interferometric imaging of extended sources are considered and results illustrating the advantages of these entropy metrics and their applicability are provided.
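A toy version of entropy-regularized least squares for a nonnegative source intensity, solved by projected gradient descent, might look like this. The forward matrix, regularization weight, and step size are invented, and a dense Gaussian matrix stands in for the Fourier-type interferometric operator of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented toy forward model: b = A s + noise, with nonnegative intensity s.
n_src, n_meas = 30, 60
A = rng.normal(size=(n_meas, n_src)) / np.sqrt(n_meas)
s_true = np.zeros(n_src)
s_true[[8, 9, 10, 20]] = [1.0, 2.0, 1.0, 1.5]
b = A @ s_true + rng.normal(0, 0.01, n_meas)

# Least squares with a maximum-entropy regulariser:
#   minimise ||A s - b||^2 + lam * sum(s_i * log s_i)  subject to s >= 0.
# Projected gradient descent; clipping keeps the logarithm well defined.
lam, step, eps = 1e-3, 0.1, 1e-8
s = np.full(n_src, 0.5)
for _ in range(2000):
    grad = 2 * A.T @ (A @ s - b) + lam * (np.log(np.maximum(s, eps)) + 1)
    s = np.clip(s - step * grad, eps, None)

print(np.round(s, 2))
```

Minimizing sum(s log s) maximizes the entropy of the reconstruction, which, together with the positivity clip, discourages the unrealistic negative or spiky solutions the abstract mentions.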
Cited by: 0
Imagery-based modeling of social, economic, and governance indicators in sub-Saharan Africa
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041911
J. Irvine, J. Kimball, J. Lepanto, J. Regan, Richard J. Wood
Many policy and national security challenges require understanding the social, cultural, and economic characteristics of a country or region. Addressing failing states, insurgencies, terrorist threats, societal change, and support for military operations requires a detailed understanding of the local population. Information about the state of the economy, levels of community support and involvement, and attitudes toward government authorities can guide decision makers in developing and implementing policies or operations. However, such information is difficult to gather in remote, inaccessible, or denied areas. Draper's previous work demonstrating the application of remote sensing to specific issues, such as population estimation, agricultural analysis, and environmental monitoring, has been very promising. In recent papers, we extended these concepts to imagery-based prediction models for governance, well-being, and social capital. Social science theory indicates the relationships among physical structures, institutional features, and social structures. Based on these relationships, we developed models for rural Afghanistan and validated the relationships using survey data. In this paper we explore the adaptation of those models to sub-Saharan Africa. Our analysis indicates that, as in Afghanistan, certain attributes of the society are predictable from imagery-derived features. The automated extraction of relevant indicators, however, depends on both spatial and spectral information. Deriving useful measures from only panchromatic imagery poses some methodological challenges and additional research is needed.
Citations: 0
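The abstract's closing point — that indicator extraction depends on both spatial and spectral information, and that panchromatic imagery alone is limiting — can be illustrated with a minimal sketch. This is not from the paper: the band values, window size, and toy scene are illustrative assumptions. A spectral index such as NDVI requires multispectral bands, while a texture measure such as local variance is the kind of feature still computable from a single panchromatic band.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Spectral feature: normalized difference vegetation index.
    Requires separate NIR and red bands (multispectral imagery)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def local_variance(pan, k=3):
    """Spatial (texture) feature: variance over a k-by-k window.
    Computable from a single panchromatic band alone."""
    pan = pan.astype(float)
    pad = k // 2
    padded = np.pad(pan, pad, mode="reflect")
    out = np.empty_like(pan)
    for i in range(pan.shape[0]):
        for j in range(pan.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

# Toy 4x4 "scene": the left half is vegetated (strong NIR reflectance).
red = np.array([[0.1, 0.1, 0.5, 0.5]] * 4)
nir = np.array([[0.6, 0.6, 0.5, 0.5]] * 4)
pan = (red + nir) / 2.0          # simulated panchromatic band

v = ndvi(nir, red)               # high over vegetated pixels, near zero elsewhere
t = local_variance(pan)          # nonzero texture only at the land-cover boundary
```

The point of the sketch is the asymmetry: `v` separates the two covers cleanly, while the panchromatic texture `t` only reacts to the boundary between them — a toy version of the methodological challenge the abstract describes.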
Medical image segmentation using multi-scale and super-resolution method
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041899
En-Ui Lin, Michel McLaughlin, A. Alshehri
In many medical imaging applications, a clear delineation and segmentation of areas of interest from low resolution images is crucial. It is one of the most difficult and challenging tasks in image processing and directly determines the quality of final result of the image analysis. In preparation for segmentation, we first use preprocessing methods to remove noise and blur and then we use super-resolution to produce a high resolution image. Next, we will use wavelets to decompose the image into different sub-band images. In particular, we will use discrete wavelet transformation (DWT) and its enhanced version double density dual discrete tree wavelet transformations (D3-DWT) as they provide better spatial and spectral localization of image representation and have special importance to image processing applications, especially medical imaging. The multi-scale edge information from the sub-bands is then filtered through an iterative process to produce a map displaying extracted features and edges, which is then used to segment homogenous regions. We have applied our algorithm to challenging applications such as gray matter and white matter segmentations in Magnetic Resonance Imaging (MRI) images.
{"title":"Medical image segmentation using multi-scale and super-resolution method","authors":"En-Ui Lin, Michel McLaughlin, A. Alshehri","doi":"10.1109/AIPR.2014.7041899","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041899","url":null,"abstract":"In many medical imaging applications, a clear delineation and segmentation of areas of interest from low resolution images is crucial. It is one of the most difficult and challenging tasks in image processing and directly determines the quality of final result of the image analysis. In preparation for segmentation, we first use preprocessing methods to remove noise and blur and then we use super-resolution to produce a high resolution image. Next, we will use wavelets to decompose the image into different sub-band images. In particular, we will use discrete wavelet transformation (DWT) and its enhanced version double density dual discrete tree wavelet transformations (D3-DWT) as they provide better spatial and spectral localization of image representation and have special importance to image processing applications, especially medical imaging. The multi-scale edge information from the sub-bands is then filtered through an iterative process to produce a map displaying extracted features and edges, which is then used to segment homogenous regions. We have applied our algorithm to challenging applications such as gray matter and white matter segmentations in Magnetic Resonance Imaging (MRI) images.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121098357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
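The wavelet step in this abstract — decompose the image into sub-bands, then combine the detail sub-bands into an edge map that drives segmentation — can be sketched with a single-level 2-D Haar DWT. This is a minimal stand-in for the DWT/D3-DWT pipeline the authors describe, not their implementation; the Haar filter choice, the threshold, and the toy image are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns the approximation sub-band (LL)
    and the horizontal/vertical/diagonal detail sub-bands (LH, HL, HH)."""
    img = img.astype(float)
    # Filter along rows: pairwise average (low-pass) and difference (high-pass).
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Filter along columns of each result.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def edge_map(img, thresh=0.1):
    """Combine the detail sub-bands into an edge magnitude and threshold
    it into a binary map of candidate region boundaries."""
    _, lh, hl, hh = haar_dwt2(img)
    mag = np.sqrt(lh**2 + hl**2 + hh**2)
    return mag > thresh

# Toy image: a bright square on a dark background.
img = np.zeros((8, 8))
img[3:6, 3:6] = 1.0
edges = edge_map(img)   # True near the square's boundary, False elsewhere
```

A full pipeline would iterate this at several scales (the paper's multi-scale filtering) and grow homogeneous regions bounded by the edge map; the sketch only shows the sub-band decomposition and the edge-magnitude step.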
Representing pictures with sound
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041934
Edward M. Schaefer Unaffiliated
A coarse representation of pictures and images can be created with sound. A series of such audio sounds can be used to represent an animation or a motion picture. In this project, images are divided into a 4×4 array of "sound elements". The position of each sound element is assigned an audio sound, and the contents of each sound element is used to compute an audio intensity. The audio for each sound element is the audio sound for its position played at the computed audio intensity. The result of combining the audios for all sound elements is an audio representing the entire image. Algorithms for creating sounds and intensities will be described. Generating sounds for motion pictures using this technique will be discussed.
{"title":"Representing pictures with sound","authors":"Edward M. Schaefer Unaffiliated","doi":"10.1109/AIPR.2014.7041934","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041934","url":null,"abstract":"A coarse representation of pictures and images can be created with sound. A series of such audio sounds can be used to represent an animation or a motion picture. In this project, images are divided into a 4×4 array of \"sound elements\". The position of each sound element is assigned an audio sound, and the contents of each sound element is used to compute an audio intensity. The audio for each sound element is the audio sound for its position played at the computed audio intensity. The result of combining the audios for all sound elements is an audio representing the entire image. Algorithms for creating sounds and intensities will be described. Generating sounds for motion pictures using this technique will be discussed.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117124270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
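The scheme in this abstract — a 4×4 array of sound elements, a fixed tone per grid position, and an amplitude computed from each element's contents — can be sketched as follows. The sample rate, duration, and the frequency assigned to each position are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

GRID = 4      # 4x4 array of "sound elements", as in the abstract
RATE = 8000   # sample rate in Hz (an assumption for this sketch)
DUR = 0.5     # seconds of audio per image (also an assumption)

def cell_frequencies():
    """Assign each grid position its own tone. A semitone-ladder mapping
    starting at 220 Hz is an illustrative choice, not the paper's."""
    steps = np.arange(GRID * GRID)
    return (220.0 * 2.0 ** (steps / 12.0)).reshape(GRID, GRID)

def image_to_audio(img):
    """Mean brightness of each sound element sets the amplitude of that
    element's tone; summing all tones yields the audio for the image."""
    img = img.astype(float)
    h, w = img.shape
    t = np.arange(int(RATE * DUR)) / RATE
    freqs = cell_frequencies()
    audio = np.zeros_like(t)
    for r in range(GRID):
        for c in range(GRID):
            cell = img[r * h // GRID:(r + 1) * h // GRID,
                       c * w // GRID:(c + 1) * w // GRID]
            audio += cell.mean() * np.sin(2 * np.pi * freqs[r, c] * t)
    return audio / (GRID * GRID)   # keep the summed signal bounded

# A frame that is bright only in the top-left element yields a single tone;
# a sequence of frames (an animation) would yield a sequence of such audios.
frame = np.zeros((64, 64))
frame[:16, :16] = 1.0
audio = image_to_audio(frame)
```

Concatenating `image_to_audio` outputs over successive frames gives the motion-picture case the abstract mentions, one short audio segment per frame.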
Journal
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)