
Latest publications: 2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Gender and age recognition for video analytics solution
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041914
V. Khryashchev, A. Priorov, A. Ganin
An application for video data analysis based on computer vision and machine learning methods is presented. Novel gender and age classifiers based on adaptive features, local binary patterns, and support vector machines are proposed. Viewer gender recognition accuracy of more than 94% is achieved. Our age estimation algorithm provides world-class results on the MORPH database, but is focused on real-life audience-measurement video data, in which faces look more or less similar to those in the RUS-FD private database. In this case we reach a total mean absolute error below 7. All the video processing stages are united into a real-time audience analysis system. The system extracts all available information about people from the input video stream, then aggregates and analyzes this information to measure different statistical parameters. Promising practical applications of such algorithms include human-computer interaction, surveillance monitoring, video content analysis, targeted advertising, biometrics, and entertainment.
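As a rough illustration of the kind of classifier described above, the sketch below trains a local binary pattern + SVM gender classifier with scikit-image and scikit-learn. It is a minimal approximation under assumed inputs (grayscale face crops `face_crops` and binary `labels`), not the authors' adaptive-feature pipeline.

```python
# Minimal LBP + SVM gender classifier (sketch; not the authors' exact pipeline).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    """Uniform-LBP histogram of a single grayscale face crop."""
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_gender_classifier(face_crops, labels):
    """face_crops: list of 2-D grayscale arrays; labels: 0 = female, 1 = male."""
    X = np.array([lbp_histogram(f) for f in face_crops])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, labels)
    return clf

# Usage with hypothetical data:
# clf = train_gender_classifier(train_faces, train_labels)
# prediction = clf.predict([lbp_histogram(test_face)])
```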
Citations: 4
Democratizing the visualization of 500 million webcam images
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041925
Joseph D. O'Sullivan, Abby Stylianou, Austin Abrams, Robert Pless
Five years ago we reported at AIPR on a nascent project to archive images from every webcam in the world and to develop algorithms to geo-locate, calibrate, and annotate this data. The Archive of Many Outdoor Scenes (AMOS) has now grown to include 28,000 live outdoor cameras and over 630 million images. It is actively being used in projects ranging from large-scale environmental monitoring to characterizing how built-environment changes (such as adding bike lanes in DC) affect physical activity patterns over time. But the biggest value of a very long-term, widely distributed image dataset is the rich set of "before" data that can be analyzed to evaluate changes from unexpected or sudden events. To facilitate the analysis of these natural experiments, we build and share a collection of web tools that support large-scale, data-driven exploration. In this work we discuss and motivate a visualization tool that uses PCA to find the subspace that characterizes the variations in each scene. This anomaly detection captures both imaging failures, such as lens flare, and unusual situations, such as street fairs, and we give an initial algorithm to cluster anomalies so that they can be quickly evaluated for whether they are of interest.
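A minimal sketch of the PCA idea: stack one camera's frames, project onto the leading principal components that span the scene's usual appearance, and score each frame by its reconstruction error, so frames outside that subspace (lens flare, street fairs) stand out. The `frames` array is an assumption; this is not the AMOS project's actual tooling.

```python
# PCA reconstruction-error anomaly scoring for one camera's image stack (sketch).
import numpy as np
from sklearn.decomposition import PCA

def anomaly_scores(frames, n_components=10):
    """frames: array of shape (n_frames, height, width), grayscale."""
    X = frames.reshape(frames.shape[0], -1).astype(np.float64)
    pca = PCA(n_components=n_components)
    Z = pca.fit_transform(X)                  # coordinates in the scene subspace
    X_hat = pca.inverse_transform(Z)          # reconstruction from that subspace
    return np.linalg.norm(X - X_hat, axis=1)  # large error => unusual frame

# Frames scoring far above the typical value could then be clustered and
# presented to a user as candidate anomalies for quick review.
```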
Citations: 6
Road sign detection on a smartphone for traffic safety
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041927
Carrie Pritt
The goal of this work is the development of a low-cost driver assistance system that runs on an ordinary smartphone. It uses computer vision techniques and multiple-resolution template matching to detect speed limit signs and alert the driver if the speed limit is exceeded. It inputs an image of the sign to be detected and creates a set of multiple-resolution templates. It also inputs photographs of the road from the smartphone camera at regular intervals and generates multiple-resolution images from the photographs. In the first step of processing, fast filters restrict the focus of attention to smaller areas of the photographs where signs are likely to be present. In the second step, the system matches the templates against the photographs using fast normalized cross correlation to detect speed limit signs. The multiple resolutions enable this approach to detect signs at different scales. In the third step, the system recognizes the sign by matching a series of annotated speed templates to the image at the position and scale that were determined by the detection step. It compares the speed limit with the actual vehicle speed as computed from the smartphone GPS device and issues warnings to the driver as necessary. The system is implemented as an Android application that runs on an ordinary smartphone as part of a client-server architecture. It processes photos at a rate of 1 Hz with a probability of detection of 0.93 at the 95% confidence level and a false alarm rate of 0.0007, or one false classification every 25 min.
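The second step, matching templates by fast normalized cross-correlation at multiple resolutions, can be sketched with OpenCV as below; the focus-of-attention filters, speed-template recognition, and GPS speed comparison are omitted, and the paths, scales, and threshold are illustrative assumptions. (As a sanity check on the reported rates: at 1 Hz, a per-frame false alarm rate of 0.0007 is roughly one false classification every 1/0.0007 ≈ 1400 s ≈ 24 minutes, consistent with the stated 25 min.)

```python
# Multi-scale normalized cross-correlation sign detection (sketch).
import cv2

def detect_sign(road_gray, template_gray, scales=(0.5, 0.75, 1.0, 1.5, 2.0), thresh=0.7):
    """Return (score, (x, y, w, h)) of the best template match across scales, or None."""
    best = None
    for s in scales:
        t = cv2.resize(template_gray, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
        if t.shape[0] > road_gray.shape[0] or t.shape[1] > road_gray.shape[1]:
            continue  # template larger than the photograph at this scale
        result = cv2.matchTemplate(road_gray, t, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= thresh and (best is None or max_val > best[0]):
            best = (max_val, (max_loc[0], max_loc[1], t.shape[1], t.shape[0]))
    return best

# road = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)             # hypothetical paths
# sign = cv2.imread("speed_limit_template.png", cv2.IMREAD_GRAYSCALE)
# hit = detect_sign(road, sign)
```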
Citations: 10
Analysis of diurnal, long-wave hyperspectral measurements of natural background and manmade targets under different weather conditions
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041903
Christoph Borel-Donohue, D. Rosario, J. Romano
In this paper we describe the end-to-end processing of imaging Fourier transform spectrometry data taken at Picatinny Arsenal in New Jersey with the long-wave hyperspectral camera from Telops. The first part of the paper discusses the processing from raw data to calibrated radiance and emissivity data. Data were taken every 6 minutes over several months, under different weather conditions, from a 213-ft tower overlooking surrogate tank targets, for a project sponsored by the Army Research Laboratory in Adelphi, MD. An automatic calibration and analysis program was developed which creates calibrated data files and HTML files. The first processing stage is flat-fielding. During this step the mean baseline is used to find dead pixels (baseline low or at the maximum). Noisy pixels are detected from the standard deviation over part of the interferogram. A flat-fielded, bad-pixel-corrected calibration cube is created using the gain and offset determined by a single blackbody measurement. In the second stage each flat-fielded cube is Fourier transformed and a two-point radiometric calibration is performed. For selected cubes a temperature-emissivity separation algorithm is applied. The second part discusses environmental effects such as diurnal and seasonal atmospheric and temperature changes and the effect of cloud cover on the data. To test the effect of environmental conditions, the range-invariant anomaly detection approach is applied to calibrated radiance, brightness temperature, and emissivity data.
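A hedged sketch of two of the steps described above, flagging dead/noisy pixels from interferogram statistics and applying a per-pixel two-point radiometric calibration; array names, quantile cutoffs, and the noise threshold are assumptions rather than values from the paper.

```python
# Bad-pixel flagging and two-point radiometric calibration (sketch).
import numpy as np

def flag_bad_pixels(interferograms, low_q=0.01, high_q=0.99, noise_sigma=5.0):
    """interferograms: (n_samples, rows, cols) raw interferogram cube for one frame."""
    baseline = interferograms.mean(axis=0)
    lo, hi = np.quantile(baseline, [low_q, high_q])
    dead = (baseline <= lo) | (baseline >= hi)            # stuck low or saturated
    std = interferograms.std(axis=0)
    noisy = std > std.mean() + noise_sigma * std.std()    # unusually noisy pixels
    return dead | noisy

def two_point_calibration(raw_hot, raw_cold, radiance_hot, radiance_cold):
    """Per-pixel gain and offset from two blackbody views of known radiance."""
    gain = (radiance_hot - radiance_cold) / (raw_hot - raw_cold)
    offset = radiance_cold - gain * raw_cold
    return gain, offset

# calibrated_scene = gain * raw_scene + offset   # applied per pixel and per band
```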
Citations: 1
On Parzen windows classifiers
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041924
Jing Peng, G. Seetharaman
Parzen Windows classifiers have been applied to a variety of density estimation as well as classification tasks with considerable success. Parzen Windows are known to converge in the asymptotic limit. However, there is a lack of theoretical analysis on their performance with finite samples. In this paper we show a connection between Parzen Windows and the regularized least squares algorithm, which has a well-established foundation in computational learning theory. This connection allows us to provide useful insight into Parzen Windows classifiers and their performance in finite sample settings. Finally, we show empirical results on the performance of Parzen Windows classifiers using a number of real data sets.
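For concreteness, a minimal Parzen Windows classifier with a Gaussian kernel is sketched below: each test point is assigned to the class whose kernel density estimate is largest (equal class priors assumed). The paper's connection to regularized least squares is not reproduced here.

```python
# Minimal Parzen Windows classifier with a Gaussian kernel (sketch).
import numpy as np

def parzen_classify(X_train, y_train, X_test, h=1.0):
    """Assign each row of X_test to the class with the largest kernel density estimate."""
    classes = np.unique(y_train)
    d = X_train.shape[1]
    norm = (2.0 * np.pi * h ** 2) ** (d / 2.0)   # Gaussian kernel normalization
    preds = []
    for x in X_test:
        densities = []
        for c in classes:
            Xc = X_train[y_train == c]
            sq_dist = np.sum((Xc - x) ** 2, axis=1)
            densities.append(np.mean(np.exp(-sq_dist / (2.0 * h ** 2))) / norm)
        preds.append(classes[int(np.argmax(densities))])
    return np.array(preds)
```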
Citations: 0
Bayesian solutions to non-Bayesian detection problems: Unification through fusion
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041935
A. Schaum
In 1950 Abraham Wald proved that every admissible statistical decision rule is either a Bayesian procedure or the limit of a sequence of such procedures. He thus provided a decision-theoretic justification for the use of Bayesian inference, even for non-Bayesian problems. It is often assumed that his result also justified the use of Bayesian priors to solve such problems. However, the principles one should use for defining the values of prior probabilities have been controversial for decades, especially when applied to epistemic unknowns. Now a new approach indirectly assigns values to the quantities usually interpreted as priors by imposing design constraints on a detection algorithm. No assumptions about prior "states of belief" are necessary. The result shows how Wald's theorem can accommodate both Bayesian and non-Bayesian problems. The unification is mediated by the fusion of clairvoyant detectors.
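For context, the textbook Bayes-optimal detector that this line of work builds on is a likelihood-ratio test whose threshold absorbs the priors \(\pi_0, \pi_1\) and the decision costs \(C_{ij}\) (the cost of deciding \(H_i\) when \(H_j\) holds); this is a standard form, not an equation taken from the paper:

\[
\Lambda(x) \;=\; \frac{p(x \mid H_1)}{p(x \mid H_0)}
\;\underset{H_0}{\overset{H_1}{\gtrless}}\;
\frac{\pi_0\,(C_{10}-C_{00})}{\pi_1\,(C_{01}-C_{11})}
\]

Read against the abstract, imposing design constraints on the detector amounts to fixing this threshold indirectly, without ever stating the priors as degrees of belief.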
Citations: 2
Rapid location of radiation sources in complex environments using optical and radiation sensors
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041940
Christoph Borel-Donohue, David J. Bunker, G. Walford
Baseline radiation background is almost never known and constantly changes, particularly in urban areas. It is difficult to know what the expected background radiation should be and how a radiological incident may elevate the radiation. Naturally occurring radiation from rocks and building materials often contributes significantly to measured radiation. Buildings and other tall structures also shield radiation and thus need to be taken into account. Models of naturally occurring background radiation can be derived from knowledge of geology, building material origins, vegetation, and weather conditions. After a radiological incident, the radiation will be elevated near the event, and some material may be transported by mechanisms such as airborne transport and/or run-off. Locating and characterizing the sources of radiation quickly and efficiently is crucial in the immediate aftermath of a nuclear incident. The distribution of radiation sources will change naturally and also due to clean-up efforts. Finding source strengths and locations during both the initial and clean-up stages is necessary to manage and reduce contamination. The overall objective of the Rapid Location of Radiation Sources in Complex Environments Using Optical and Radiation Sensors research project is to design and validate gamma-ray spectrum estimation algorithms that integrate optical and radiation sensor collections into high-resolution, multi-modal site models for use in radiative transport codes. Our initial focus is on modeling the background radiation using hyperspectral information from the visible through the shortwave infrared sensors and thermal imagers. The optical data complement available ancillary data from other sources such as Geographic Information System (GIS) layers, e.g. geologic maps, terrain, surface cover type, road network, vegetation (e.g. serpentine vegetation), 3-D building models, known users of radiological sources, etc. In the absence of GIS layers, data from the multi/hyperspectral imager and height data from LIDAR can be analyzed with special software to automatically create GIS layers and, together with radiation survey data, to come up with a method for predicting the background radiation distribution. We believe the estimation and prediction of the natural background will be helpful in finding anomalous point, line, and small-area sources and will minimize the number of false alarms due to natural and known man-made radiation sources such as radiological medical facilities and industrial users of radiological sources.
Citations: 0
Timing mark detection on nuclear detonation video
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041902
Daniel T. Schmitt, Gilbert L. Peterson
During the 1950s and 1960s the United States conducted and filmed over 200 atmospheric nuclear tests, establishing the foundations of atmospheric nuclear detonation behavior. Each explosion was documented with about 20 videos from three or four points of view. Synthesizing the videos into a 3D video will improve yield estimates and reduce error factors. The videos were captured at a nominal 2500 frames per second, but range from 2300-3100 frames per second during operation. In order to combine them into one 3D video, individual video frames need to be correlated in time with each other. When the videos were captured, a timing system was used that shone light into the camera every 5 milliseconds, creating a small circle exposed in the frame. This paper investigates several methods of extracting the timing from images in the cases when the timing marks are occluded or washed out, as well as when the films are exposed as expected. Results show an improvement over past techniques. For normal videos, occluded videos, and washed-out videos, timing is detected with 99.3%, 77.3%, and 88.6% probability, with false alarm rates of 2.6%, 11.3%, and 5.9%, respectively.
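As an illustration only (not necessarily the paper's detector, which also has to cope with occluded and washed-out marks), a small exposed timing circle can be located in a frame with a Hough circle transform; the radii and accumulator thresholds below are assumptions.

```python
# Locating a small exposed timing circle in a film frame (illustrative sketch).
import cv2
import numpy as np

def find_timing_mark(frame_gray, min_radius=3, max_radius=15):
    """Return (x, y, r) of the strongest circle candidate, or None."""
    blurred = cv2.GaussianBlur(frame_gray, (5, 5), 0)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
        param1=100, param2=20, minRadius=min_radius, maxRadius=max_radius)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r
```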
Citations: 2
Robust vehicle edge detection by cross filter method
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041898
K. Tang, Henry Y. T. Ngan
In visual surveillance, vehicle tracking and identification is very popular and is applied in many areas such as traffic incident detection, traffic control, and management. Edge detection is key to the success of vehicle tracking and identification. Edge detection identifies edge locations or geometrical shape changes, in terms of pixel values, along the boundary between two regions in an image. This paper aims to investigate different edge detection methods and introduce a Cross Filter (CF) method, with a two-phase filtering approach, for vehicle images in a given database. First, four classical edge detectors, namely the Canny, Prewitt, Roberts, and Sobel detectors, are tested on the vehicle images. The Canny-detected image is found to offer the best performance in Phase 1. In Phase 2, the robust CF, based on a spatial relationship of intensity change along edges, is applied to the Canny-detected image as a second filtering process. Visual and numerical comparisons among the classical edge detectors and the CF detector are also given. The average DSR of the proposed CF method on 10 vehicle images is 95.57%.
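The Phase 1 comparison of the four classical detectors can be reproduced with scikit-image as sketched below; the Cross Filter of Phase 2 is specific to the paper and is not reproduced here. The image path is a placeholder and an RGB input is assumed.

```python
# Phase-1-style comparison of classical edge detectors on a vehicle image (sketch).
from skimage import color, feature, filters, io

def classical_edges(image_path):
    gray = color.rgb2gray(io.imread(image_path))   # assumes an RGB input image
    return {
        "canny": feature.canny(gray, sigma=2.0),   # binary edge map
        "sobel": filters.sobel(gray),              # gradient-magnitude maps below
        "prewitt": filters.prewitt(gray),
        "roberts": filters.roberts(gray),
    }

# edges = classical_edges("vehicle.jpg")   # hypothetical path
```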
Citations: 1
Enhanced material identification using polarimetric hyperspectral imaging
Pub Date : 2014-10-01 DOI: 10.1109/AIPR.2014.7041920
Jacob A. Martin, K. Gross
Polarimetric and hyperspectral imaging are two of the most frequently used remote sensing modalities. While extensive work has been done in both fields independently, relatively little work has been done using the two in conjunction with one another. Combining these two common remote sensing techniques, we hope to estimate the index of refraction without a priori knowledge of local weather conditions or object surface temperature. In general, this is an underdetermined problem, but modeling the spectral behavior of the index of refraction reduces the number of parameters needed to describe the index of refraction, and thus the reflectivity. This allows the additional scene parameters needed to describe the radiance signature from a target to be simultaneously solved for. The method uses spectrally resolved S0 and S1 radiance measurements, taken using an IFTS with a linear polarizer mounted to the front, to simultaneously solve for a material's index of refraction, surface temperature, and downwelling radiance. Measurements at multiple angles relative to the surface normal can also be taken to provide further constraining information in the fit. Results on both simulated and measured data are presented, showing that this technique is largely robust to changes in object temperature.
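The surface-temperature side of such a retrieval rests on Planck's law; a small sketch of the forward model and its brightness-temperature inverse is given below. This is standard radiometry, not the authors' full S0/S1 index-of-refraction fit.

```python
# Planck radiance and brightness-temperature inversion (standard radiometry sketch).
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
K = 1.380649e-23     # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W / (m^2 sr m)] at the given wavelength."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = H * C / (wavelength_m * K * temp_k)
    return a / np.expm1(b)

def brightness_temperature(wavelength_m, radiance):
    """Temperature [K] of the blackbody that would produce the observed radiance."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return H * C / (wavelength_m * K * np.log1p(a / radiance))

# Round trip at 10 um and 300 K:
# brightness_temperature(10e-6, planck_radiance(10e-6, 300.0)) -> ~300.0
```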
Citations: 2