
Latest publications from the 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Multi-resolution deblurring
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041901
Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas
As technology advances, blur remains an ever-present issue in the image processing field. A blurred image is mathematically expressed as the convolution of a blur function with a sharp image, plus noise. Removing blur from an image has been widely researched and remains important as new images are collected. Without a reference image, identifying, measuring, and removing blur from a given image is very challenging. Deblurring involves estimating a blur kernel that matches the type of blur present, including camera motion, defocus, or object motion. Various blur kernels have been studied over many years, but the most common function is the Gaussian. Once the blur kernel (function) is estimated, a deconvolution is performed with the kernel and the blurred image. Many existing methods operate in this manner; however, while these methods remove blur from the blurred region, they also alter the un-blurred regions of the image. This pixel alteration occurs because the actual intensity values of the pixels are easily distorted during the deblurring process. The method proposed in this paper uses multi-resolution analysis (MRA) techniques to separate blur, edge, and noise coefficients. Deconvolution with the estimated blur kernel is then performed on these coefficients, instead of the actual pixel intensity values, before reconstructing the image. Additional steps are taken to retain the quality of the un-blurred regions of the blurred image. Experimental results on simulated and real data show that our approach achieves higher-quality results than previous approaches on various blurry and noisy images, as measured by several metrics including mutual information and structural similarity based metrics.
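The blur model described above (blurred = blur kernel convolved with sharp image, plus noise) and the deconvolution step can be sketched as follows. This is an illustrative frequency-domain Wiener deconvolution with an assumed Gaussian kernel, not the authors' MRA-based method; the `nsr` regularization constant is a hypothetical parameter standing in for the unknown noise-to-signal ratio.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian blur kernel, the most common blur function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def _kernel_fft(kernel, shape):
    """Zero-pad the kernel to the image shape, center it at the origin, FFT it."""
    kpad = np.zeros(shape)
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(kpad)

def blur(image, kernel):
    """Blurred image = kernel (*) sharp image, as a circular convolution."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * _kernel_fft(kernel, image.shape)))

def wiener_deconvolve(blurred, kernel, nsr=1e-3):
    """Deconvolve with the estimated kernel; nsr regularizes suppressed frequencies."""
    K = _kernel_fft(kernel, blurred.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * H))
```

Running the filter on a synthetically blurred image should recover an estimate closer to the sharp original than the blurred input is, which is the basic property the paper's coefficient-domain variant builds on.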
Citations: 5
Automatic segmentation of carcinoma in radiographs
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041904
Fatema A. Albalooshi, Sara Smith, Yakov Diskin, P. Sidike, V. Asari
A strong emphasis has been placed on making the healthcare system and the diagnostic procedure more efficient. In this paper, we present an automatic detection technique designed to segment out abnormalities in X-ray imagery. The proposed algorithm allows radiologists and their assistants to sort and analyze large amounts of imagery more effectively. In radiology, X-ray beams are used to detect various densities within a tissue and to display accompanying anatomical and architectural distortion. Lesion localization within fibrous or dense tissue is complicated by a lack of clear visualization compared to tissues with an increased fat distribution. As a result, carcinoma and its associated unique patterns can often be overlooked within dense tissue. We introduce a new segmentation technique that integrates prior knowledge taken from prior data, such as the intensity level, color distribution, texture, gradient, and shape of the region of interest, within a segmentation framework to enhance the performance of region and boundary extraction of defective tissue regions in medical imagery. Prior knowledge of the intensity of the region of interest can be extremely helpful in guiding the segmentation process, especially when the carcinoma boundaries are not well defined and when the image contains non-homogeneous intensity variations. We evaluate our algorithm by comparing our detection results to manually segmented regions of interest. Through these metrics, we also illustrate the effectiveness and accuracy of the algorithm in improving diagnostic efficiency for medical experts.
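A minimal sketch of how an intensity prior can guide segmentation, assuming a Gaussian intensity model for the region of interest. The paper's full framework also integrates color, texture, gradient, and shape cues; the `prior_mean`, `prior_std`, and `threshold` parameters here are illustrative assumptions.

```python
import numpy as np

def intensity_prior_mask(image, prior_mean, prior_std, threshold=0.5):
    """Score each pixel by its likelihood under a Gaussian intensity prior
    (learned from previously segmented regions), then threshold the score
    to obtain a candidate region-of-interest mask."""
    score = np.exp(-0.5 * ((image - prior_mean) / prior_std) ** 2)
    return score >= threshold
```

Pixels whose intensity is close to the prior mean receive scores near 1 and are retained; pixels far from it are suppressed, which is one way prior intensity knowledge can constrain the boundary search when edges are poorly defined.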
Citations: 4
An automated workflow for observing track data in 3-dimensional geo-accurate environments
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041895
D. Walvoord, Andrew C. Blose, B. Brower
Recent developments in computing capabilities and persistent surveillance systems have enabled advanced analytics and visualization of image data. Building on our existing capabilities, this work focuses on developing a unified approach to the task of visualizing track data in 3-dimensional environments. We review our current structure-from-motion (SfM) workflow to highlight our point cloud generation methodology, which offers the option to use available sensor telemetry to improve performance. To this end, our discussion includes an algorithm outline for navigation-guided feature matching and geo-rectification in the absence of ground control points (GCPs). We then provide a brief overview of our onboard processing suite, which includes real-time mosaic generation, image stabilization, and feature tracking. Exploitation of geometry refinements, inherent to the SfM workflow, is then discussed in the context of projecting track data into the point cloud environment for advanced visualization. Results using the new Exelis airborne collection system, Corvus Eye, are provided to support our conclusions and identify areas for future work.
Citations: 0
3D sparse point reconstructions of atmospheric nuclear detonations
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041938
Robert C. Slaughter, J. McClory, Daniel T. Schmitt, M. Sambora, K. Walli
Researchers at Lawrence Livermore National Laboratory (LLNL) have started digitizing technical films spanning the above-ground atmospheric nuclear testing operations conducted by the United States from 1950 through the 1960s. This technical film test data represents unique information that can be used as a primary validation data source for the nuclear effects codes that national researchers use for assessments in the nuclear force management, nuclear detection and reporting, and nuclear forensics mission areas. Researchers at the Air Force Institute of Technology (AFIT) have begun employing modern digital image processing and computer vision techniques to exploit this data set and determine specific invariant features of early dynamic fireball growth. The focus of this paper is to introduce the methodology used for three-dimensional sparse reconstructions of nuclear fireballs. Also discussed are the difficulties associated with the technique.
Citations: 2
Depth data assisted structure-from-motion parameter optimization and feature track correction
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041930
S. Recker, C. Gribble, Mikhail M. Shashkov, Mario Yepez, Mauricio Hess-Flores, K. Joy
Structure-from-Motion (SfM) applications attempt to reconstruct the three-dimensional (3D) geometry of an underlying scene from a collection of images taken from various camera viewpoints. Traditional optimization techniques in SfM, which compute and refine camera poses and 3D structure, rely only on feature tracks, or sets of corresponding pixels, generated from color (RGB) images. With the abundance of reliable depth sensor information, these optimization procedures can be augmented to increase the accuracy of reconstruction. This paper presents a general cost function, which evaluates the quality of a reconstruction based upon a previously established angular cost function and depth data estimates. The cost function takes into account two error measures: first, the angular error between each computed 3D scene point and its corresponding feature track location, and second, the difference between the sensor depth value and its computed estimate. A bundle adjustment parameter optimization is implemented using the proposed cost function and evaluated for accuracy and performance. As opposed to traditional bundle adjustment, in the event of feature tracking errors, a corrective routine is also presented to detect and correct inaccurate feature tracks. The filtering algorithm involves clustering depth estimates of the same scene point and observing the difference between the depth point estimates and the triangulated 3D point. Results on both real and synthetic data are presented and show that reconstruction accuracy is improved.
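The two error measures named in the abstract can be combined per observation roughly as below. This is a sketch of the idea under stated assumptions, not the paper's exact formulation: `feature_ray` is assumed to be the back-projected viewing ray of the observed feature, and `w_depth` is a hypothetical weight balancing the angular and depth terms.

```python
import numpy as np

def combined_cost(point_3d, cam_center, feature_ray, sensor_depth, w_depth=1.0):
    """Per-observation cost: the angle between the ray from the camera center
    to the reconstructed 3D point and the observed feature's viewing ray,
    plus a weighted residual against the depth sensor's measurement."""
    ray = point_3d - cam_center
    depth_est = np.linalg.norm(ray)          # computed depth estimate
    cos_a = np.dot(ray / depth_est, feature_ray / np.linalg.norm(feature_ray))
    angular_err = np.arccos(np.clip(cos_a, -1.0, 1.0))
    return angular_err + w_depth * abs(sensor_depth - depth_est)
```

A bundle adjuster would sum this cost over all observations and minimize it over camera poses and 3D points; a perfectly consistent observation yields zero cost, and either a misaligned ray or a depth disagreement raises it.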
Citations: 3
Mobile ISR: Intelligent ISR management and exploitation for the expeditionary warfighter
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041918
Donald Madden, T. Choe, Hongli Deng, Kiran Gunda, H. Gupta, N. Ramanathan, Z. Rasheed, E. Shayne, Asaad Hakeem
Modern warfighters are informed by an expanding variety of Intelligence, Surveillance and Reconnaissance (ISR) sources, but the timely exploitation of this data poses a significant challenge. ObjectVideo ("OV") presents Mobile ISR, a system to facilitate ISR knowledge discovery for expeditionary warfighters. The aim is to collect, manage, and deliver time-critical information when and where it is needed most. The Mobile ISR system consumes video, still imagery, and target metadata from airborne, ground-based, and hand-held sensors, and indexes that data based on content using state-of-the-art video analytics and user tagging. The data is stored in a geospatial database and disseminated to warfighters according to their mission context and current activity. The warfighters use an Android mobile application to view this data in the context of an interactive map or augmented reality display, and to capture their own imagery and video. A complex event processing engine enables powerful queries against the knowledge base. The system leverages the extended DoD Discovery Metadata Specification (DDMS) card format, with extensions to include representations of entities, activities, and relationships.
Citations: 1
A container-based elastic cloud architecture for real-time full-motion video (FMV) target tracking
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041896
Ryan Wu, Yu Chen, Erik Blasch, Bingwei Liu, Genshe Chen, Dan Shen
Full-motion video (FMV) target tracking requires that the objects of interest be detected in a continuous video stream. Maintaining a stable track can be challenging as target attributes change over time, frame rates vary, and image alignment errors drift. As such, optimizing FMV target tracking performance to address dynamic scenarios is critical. Many target tracking algorithms do not take advantage of parallelism because of dependencies on previous estimates, which leaves computation resources idle while those dependencies resolve. To address this problem, a container-based virtualization technology is adopted to make more efficient use of computing resources and achieve an elastic information fusion cloud. In this paper, we leverage the benefits provided by container-based virtualization to optimize an FMV target tracking application. Using OpenVZ as the virtualization platform, we parallelize video processing by distributing incoming frames across multiple containers. A concurrent container partitions the video stream into frames and then reassembles the processed frames into the video output. We implement a system that dynamically allocates virtual environment (VE) computing resources to match frame production and consumption between VEs. The experimental results verify the viability of container-based virtualization for improving FMV target tracking performance and demonstrate a solution for mission-critical information fusion tasks.
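The partition/process/reassemble pipeline described above can be sketched as follows. This is a minimal single-machine analogue using a thread pool rather than OpenVZ containers, and `process_frame` is a hypothetical stand-in for the per-frame tracking work; it illustrates only the frame-distribution and order-preserving reassembly pattern.

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame):
    # Stand-in for per-frame work (detection, feature extraction, etc.).
    return [px * 2 for px in frame]

def process_video(frames, workers=4):
    """Distribute frames across workers, then reassemble the results in
    the original frame order to reconstruct the output stream."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() yields results in submission order, so the output stream
        # keeps the input frame ordering even if workers finish out of order.
        return list(pool.map(process_frame, frames))
```

The paper's point is that frames without inter-frame dependencies can be fanned out this way across elastic compute units; the hard part it addresses is allocating those units dynamically so frame production and consumption stay matched.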
Citations: 20
Extension of no-reference deblurring methods through image fusion
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041905
M. Ferris, Erik P. Blasen, Michel McLaughlin
Extracting an optimal amount of information from a blurred image, without a reference image for comparison, is a pressing issue in image quality enhancement. Most studies have approached deblurring with iterative algorithms that attempt to deconvolve the blurred image into the ideal image. Deconvolution is difficult because a point spread function for the blur must be estimated after each iteration, which can be computationally expensive over many iterations and often introduces some amount of distortion, or "ringing," in the deblurred image. However, image fusion may provide a solution. By deblurring an image without a reference, then fusing the result with the blurred image, it is possible to extract additional salient information from the fused image; however, the deblurring process causes some degree of information loss, since fixing one section of the image can distort another section. Hence, when fusing the blurred and deblurred images together, it is critical to retain important information from the blurred image while reducing the "ringing" in the deblurred image. To evaluate the fusion process, three different evaluation metrics are used: Mutual Information (MI), Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). This paper details an extension of the no-reference image deblurring process, and the initial results indicate that image fusion has the potential to be a useful tool for recovering information in a blurred image.
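Two of the three evaluation metrics named above have simple closed forms; a minimal numpy sketch is below (mutual information needs a joint histogram over the two images and is omitted). The `max_val` default assumes 8-bit imagery.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images; lower is better."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.
    Identical images have zero error, so PSNR is infinite by convention."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val**2 / err)
```

In the fusion evaluation described above, these scores would be computed between the fused result and the blurred input (no ground-truth sharp image being available) to quantify how much information the fusion retains.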
Citations: 6
Novel geometric coordination registration in cone-beam computed Tomogram
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041922
W. Y. Lam, Henry Y. T. Ngan, P. Wat, H. Luk, E. Pow, T. Goto
The use of cone-beam computed tomography (CBCT) in the medical field helps clinicians visualize the hard tissues of the head and neck region via a cylindrical field of view (FOV). The images are usually presented as a reconstructed three-dimensional (3D) volume together with its orthogonal (x-, y-, and z-plane) images. The spatial relationship of the structures in these orthogonal views is important for diagnosing disease as well as for treatment planning. However, non-standardized positioning of the object during CBCT data acquisition often induces measurement errors, since orthogonal images cut at different planes can look similar. To solve this problem, this paper proposes an effective mapping from the physical Cartesian coordinates of a cube to its respective coordinates in the 3D imaging. The object (real physical domain) and the imaging (computerized virtual domain) can thus be linked and registered. In this way, the geometric coordination of the object/imaging can be defined and its orthogonal images fixed on defined planes. The images can then be measured with vector information, and serial imagings can also be directly compared.
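A mapping between corresponding physical and imaging coordinates, as described above, can be estimated by a least-squares affine fit to matched landmark points (e.g. cube corners). This is a generic registration sketch under that assumption, not the paper's specific method; all function names are illustrative.

```python
import numpy as np

def fit_affine_3d(phys, img):
    """Least-squares affine map (A, t) such that img ~ phys @ A.T + t.

    phys, img: (N, 3) arrays of corresponding landmark points (N >= 4,
    not coplanar) in the physical and imaging coordinate frames.
    """
    phys = np.asarray(phys, float)
    img = np.asarray(img, float)
    X = np.hstack([phys, np.ones((len(phys), 1))])   # homogeneous design matrix [x y z 1]
    M, *_ = np.linalg.lstsq(X, img, rcond=None)      # (4, 3) solution
    A, t = M[:3].T, M[3]
    return A, t

def apply_affine(A, t, pts):
    """Map physical-domain points into the imaging domain."""
    return np.asarray(pts, float) @ A.T + t
```

Once (A, t) is known, any physical-domain measurement can be expressed in imaging coordinates (and vice versa via the inverse), which is what allows serial scans to be compared on the same defined planes.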
Citations: 1
Range invariant anomaly detection for LWIR polarimetric imagery
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041931
J. Romano, D. Rosario
In this paper we present a modified version of a previously proposed anomaly detector for polarimetric imagery. The modified version is a more adaptive, range-invariant anomaly detector based on the covariance difference test, the M-Box. The paper demonstrates the range-to-target dependency underlying the previous algorithm and offers a solution that is easily implemented with the M-Box covariance test. Results show that the new algorithm is capable of identifying manmade objects as anomalies in both close- and long-range scenarios.
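The covariance difference test referenced above is Box's M statistic, which compares the covariance matrices of two samples; a large value flags a test window whose covariance structure differs from the background. The sketch below shows the statistic itself, under the assumption that pixels are represented as p-dimensional feature vectors; it is not the authors' full detector.

```python
import numpy as np

def box_m_statistic(x1, x2):
    """Box's M statistic comparing the covariance matrices of two samples.

    x1, x2: (n_i, p) arrays of p-dimensional pixel feature vectors, e.g.
    a test window versus surrounding background. Larger M indicates a
    greater covariance difference, marking the window as more anomalous.
    """
    n1, n2 = len(x1), len(x2)
    s1 = np.cov(x1, rowvar=False)
    s2 = np.cov(x2, rowvar=False)
    sp = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)   # pooled covariance
    logdet = lambda s: np.linalg.slogdet(s)[1]             # stable log-determinant
    return ((n1 + n2 - 2) * logdet(sp)
            - (n1 - 1) * logdet(s1)
            - (n2 - 1) * logdet(s2))
```

Sliding this comparison over the image and thresholding M yields an anomaly map; the statistic itself depends only on sample covariances, which is what makes a covariance-based detector attractive for range-invariant operation.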
Citations: 1
Journal
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)