
Latest Publications from 32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.

Tracking and handoff between multiple perspective camera views
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284284
S. Guler, John M. Griffith, Ian A. Pushee
We present a system for tracking objects across multiple uncalibrated cameras with widely varying perspective views. The spatial relationships between the views are established with a simple setup, using the tracks of objects moving in and out of the individual camera views. A parameterized Edge of Field of View (EoFOV) map, augmented with internal overlap region boundaries, is generated from the detected object trajectories in each view. This EoFOV map is then used to associate objects entering or leaving a particular camera's FOV with the corresponding objects in another camera's view, providing uninterrupted object tracking across multiple cameras. The main focus of the paper is robust tracking and handoff of objects between omni-directional and regular narrow-FOV surveillance video cameras without the need for formal camera calibration. The system tracks objects in both omni-directional and narrow-field camera views using adaptive background subtraction followed by foreground object segmentation based on gradient and region correspondence.
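To make the per-camera stage concrete, the sketch below illustrates the first step named in the abstract, adaptive background subtraction followed by foreground masking. It is a generic running-average model, not the authors' implementation; the learning rate and difference threshold are assumed values.

```python
import numpy as np

def update_background(background, frame, alpha=0.02):
    """Running-average background model; alpha is an assumed learning rate."""
    return (1.0 - alpha) * background + alpha * frame.astype(np.float64)

def foreground_mask(background, frame, threshold=25.0):
    """Mark pixels whose color differs from the background model by more than `threshold`."""
    diff = np.abs(frame.astype(np.float64) - background)
    return diff.max(axis=-1) > threshold   # max over the color channels

def track_foreground(frames):
    """Return a foreground mask for each frame after the first (H x W x 3 uint8 arrays)."""
    background = frames[0].astype(np.float64)
    masks = []
    for frame in frames[1:]:
        masks.append(foreground_mask(background, frame))
        background = update_background(background, frame)
    return masks
```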
Citations: 20
Photo-realistic representation of anatomical structures for medical education by fusion of volumetric and surface image data
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284261
Arthur W. Wetzel, G. L. Nieder, Geri Durka-Pelok, T. Gest, S. Pomerantz, Démian Nave, S. Czanner, Lynn Wagner, Ethan Shirey, D. Deerfield
We have produced improved photo-realistic views of anatomical structures for medical education by combining photographic images of anatomical surfaces with optical, CT and MRI volumetric data such as that provided by the NLM Visible Human Project. Volumetric data contains the information needed to construct 3D geometrical models of anatomical structures, but cannot provide a realistic appearance for surfaces. Nieder has captured high-quality photographic sequences of anatomy specimens over a range of rotational angles. These have been assembled into QuickTime VR Object movies that can be viewed statically or dynamically. We reuse this surface imagery to produce textures and surface reflectance maps for 3D anatomy models, allowing viewing from any orientation and lighting condition. Because the volumetric data comes from different individuals than the surface images, we have to warp these data into alignment. Currently we do not use structured lighting or other direct 3D surface information, so surface shape is recovered from the rotational sequences using silhouettes and texture correlations. The results of this work improve the appearance and generality of the models used for anatomy instruction with the PSC Volume Browser.
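The last step described, recovering surface shape from rotational silhouette sequences, can be pictured with a minimal visual-hull (silhouette carving) sketch. It assumes an orthographic camera, known turntable angles, and precomputed binary silhouettes, none of which are details taken from the paper; the texture-correlation refinement the abstract mentions is not shown.

```python
import numpy as np

def carve_visual_hull(silhouettes, angles_deg, grid_size=64, world_extent=1.0):
    """Carve a voxel occupancy grid from turntable silhouettes.

    `silhouettes` is a list of H x W boolean masks; `angles_deg[i]` is the
    turntable angle of frame i. Assumes an orthographic camera viewing along
    the z axis and rotation about the vertical axis (assumed geometry).
    """
    h, w = silhouettes[0].shape
    scale = min(h, w) / (2.0 * world_extent)          # world units -> pixels
    c = np.linspace(-world_extent, world_extent, grid_size)
    xs, ys, zs = np.meshgrid(c, c, c, indexing="ij")  # voxel centers
    occupied = np.ones(xs.shape, dtype=bool)

    for mask, theta in zip(silhouettes, np.deg2rad(angles_deg)):
        # Rotate voxel centers about the vertical axis, project orthographically.
        x_cam = xs * np.cos(theta) + zs * np.sin(theta)
        u = np.round(x_cam * scale + w / 2).astype(int)   # image column
        v = np.round(ys * scale + h / 2).astype(int)      # image row
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep = np.zeros_like(occupied)
        keep[inside] = mask[v[inside], u[inside]]
        occupied &= keep                                   # carve voxels outside any silhouette
    return occupied
```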
Citations: 4
Associative memory based on ratio learning for real time skin color detection
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284264
Ming-Jung Seow, V. Asari
A novel approach for skin color modeling using a ratio rule learning algorithm is proposed in this paper. The learning algorithm is applied to a real-time skin color detection application. The neural network learns based on the degree of similarity between the relative magnitudes of the output of each neuron with respect to those of all other neurons. The activation/threshold function of the network is determined by the statistical characteristics of the input patterns. Theoretical analysis has shown that the network is able to learn and recall the trained patterns reliably. It is shown mathematically that the network system is stable and converges in all circumstances for the trained patterns. The network utilizes the ratio-learning algorithm to model the characteristics of skin color in the RGB space as a linear attractor, so that skin colors converge to a line of attraction. The new technique is applied to images captured by a surveillance camera, and it is observed that the skin color model is capable of processing 420×315-resolution, 24-bit color images at 30 frames per second on a dual Xeon 2.2 GHz workstation running Windows 2000.
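As a rough geometric illustration of the "line of attraction" idea, the sketch below fits a 3D line to labeled skin-pixel RGB samples and classifies pixels by their distance to that line. It is only a stand-in for intuition: the paper's ratio-learning associative memory and its statistically derived activation function are not reproduced, and the distance threshold is an assumed value.

```python
import numpy as np

def fit_skin_line(skin_rgb):
    """Fit a 3D line (mean point + unit direction) to skin-pixel RGB samples via SVD."""
    mean = skin_rgb.mean(axis=0)
    _, _, vt = np.linalg.svd(skin_rgb - mean, full_matrices=False)
    return mean, vt[0]                         # dominant direction of the skin samples

def skin_mask(image, mean, direction, max_dist=20.0):
    """Label pixels whose RGB value lies within `max_dist` of the fitted line as skin."""
    pix = image.reshape(-1, 3).astype(np.float64) - mean
    along = pix @ direction                    # component along the line
    perp = pix - np.outer(along, direction)    # component perpendicular to the line
    dist = np.linalg.norm(perp, axis=1)
    return (dist < max_dist).reshape(image.shape[:2])
```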
Citations: 2
Imaging of moving targets using a Doppler compensated multiresolution method
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284251
R. Bonneau
Traditional radar imaging has difficulty imaging moving targets because of Doppler shifts induced in the imagery and the limited spatial resolution of the target. We propose a method that uses a multiresolution processing technique to sharpen the ambiguity function of moving objects, removing Doppler-induced imaging errors and improving instantaneous resolution. This method allows instantaneous imaging of both static and moving objects in a computationally efficient manner, thereby enabling more real-time radar imagery generation.
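For readers unfamiliar with the ambiguity function being sharpened, the sketch below evaluates the standard narrowband ambiguity surface of a linear FM pulse. The paper's multiresolution Doppler compensation itself is not shown, and the pulse and Doppler-span parameters are illustrative values only.

```python
import numpy as np

def ambiguity_surface(pulse, fs, doppler_bins=64, max_doppler=500.0):
    """Sampled magnitude of the narrowband ambiguity function of `pulse`.

    Each row Doppler-shifts the pulse by f_d and cross-correlates it with the
    original, giving |chi(tau, f_d)| on a (doppler, lag) grid.
    """
    n = len(pulse)
    lags = np.arange(-n + 1, n) / fs
    dopplers = np.linspace(-max_doppler, max_doppler, doppler_bins)
    t = np.arange(n) / fs
    surface = np.zeros((doppler_bins, 2 * n - 1))
    for i, fd in enumerate(dopplers):
        shifted = pulse * np.exp(2j * np.pi * fd * t)          # Doppler-shifted copy
        surface[i] = np.abs(np.correlate(shifted, pulse, mode="full"))
    return lags, dopplers, surface

# Example: ambiguity surface of a linear FM chirp (parameters are illustrative).
fs, duration, bandwidth = 1e6, 1e-3, 100e3
t = np.arange(0, duration, 1 / fs)
chirp = np.exp(1j * np.pi * (bandwidth / duration) * t ** 2)
lags, dopplers, surf = ambiguity_surface(chirp, fs)
```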
Citations: 6
Image formation through walls using a distributed radar sensor array
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284277
A. Hunt
Through-the-wall surveillance is a difficult but important problem for both law enforcement and military personnel. Obtaining information on both the internal features of a structure and the location of people inside improves operational effectiveness in search-and-rescue, hostage, and barricade situations. However, the electromagnetic properties of walls constrain the choices available as sensor candidates. We have demonstrated that a high-range-resolution radar operating between 450 MHz and 2 GHz can be used with a fixed linear array of antennas to produce images and detect motion through both interior and exterior walls. While the experimental results are good, it has been shown that the linear array causes signal processing artifacts that appear as ghosts in the resulting images. By moving toward a sensor concept in which the antennas in the array are randomly spaced, the effect of ghost images can be reduced and operational and performance benefits gained.
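A textbook way to form such images from array echo data is delay-and-sum backprojection, sketched below for a monostatic geometry. This is not the paper's processing chain; it only shows where the antenna positions enter the imaging sum, which is the lever the paper pulls when it proposes random spacing to reduce grating-lobe "ghosts".

```python
import numpy as np

def delay_and_sum_image(echoes, antenna_x, fs, c, grid_x, grid_y):
    """Monostatic delay-and-sum (backprojection) image from array echo data.

    echoes[i] is the sampled echo recorded at antenna position (antenna_x[i], 0);
    each pixel accumulates the sample at the two-way delay from that antenna.
    grid_x and grid_y are 1-D NumPy arrays of pixel coordinates in metres.
    """
    image = np.zeros((len(grid_y), len(grid_x)))
    for ant_x, echo in zip(antenna_x, echoes):
        for iy, y in enumerate(grid_y):
            r = np.sqrt((grid_x - ant_x) ** 2 + y ** 2)    # one-way range to each pixel
            idx = np.round(2.0 * r / c * fs).astype(int)   # two-way delay in samples
            valid = idx < len(echo)
            image[iy, valid] += echo[idx[valid]]
    return image
```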
Citations: 44
Eigenviews for object recognition in multispectral imaging systems
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284245
R. Ramanath, W. Snyder, H. Qi
We address the problem of representing multispectral images of objects using eigenviews for recognition purposes. Eigenviews have long been used for object recognition and pose estimation in grayscale and color image settings. The purpose of this paper is two-fold: first, to extend the idea of eigenviews to multispectral images, and second, to propose the use of dimensionality reduction techniques other than those popularly used. Principal Component Analysis (PCA) and its various kernel-based flavors are popularly used to extract eigenviews. We propose Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF) as possible candidates for eigenview extraction. Multispectral images of a collection of 3D objects captured from different viewpoint locations are used to obtain representative views (eigenviews) that encode the information in these images. The idea is illustrated with a collection of eight synthetic objects imaged in both reflection and emission bands. A nearest-neighbor classifier is used to classify an arbitrary view of an object. Classifier performance under additive white Gaussian noise is also tested. The results demonstrate that this system holds promise for object recognition in the multispectral imaging setting and for novel dimensionality reduction techniques. The number of eigenviews needed by the various techniques to obtain a given classifier accuracy is also calculated as a measure of the performance of each dimensionality reduction technique.
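A minimal version of the eigenview-plus-nearest-neighbor pipeline, using PCA for the dimensionality reduction, might look like the sketch below. It assumes views are stored as (num_views, H, W, bands) arrays and uses an illustrative component count; FastICA or NMF from scikit-learn could be swapped in for PCA to mirror the paper's other candidates.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_eigenview_classifier(views, labels, n_components=20):
    """Fit eigenviews (a PCA basis) over flattened multispectral views and a
    1-nearest-neighbor classifier in the reduced space.

    `views` has shape (num_views, H, W, num_bands); the component count is an
    assumed value rather than one reported in the paper.
    """
    X = views.reshape(len(views), -1).astype(np.float64)
    pca = PCA(n_components=n_components).fit(X)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), labels)
    return pca, clf

def classify_view(view, pca, clf):
    """Project a new view onto the eigenviews and return the predicted object label."""
    x = view.reshape(1, -1).astype(np.float64)
    return clf.predict(pca.transform(x))[0]
```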
Citations: 26
A survey of recent developments in theoretical neuroscience and machine vision
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284273
J. Colombe
Efforts to explain human and animal vision, and to automate visual function in machines, have found it difficult to account for the view-invariant perception of universals such as environmental objects or processes, and the explicit perception of featural parts and wholes in visual scenes. A handful of unsupervised learning methods, many of which relate directly to independent components analysis (ICA), have been used to make predictive perceptual models of the spatial and temporal statistical structure in natural visual scenes, and to develop principled explanations for several important properties of the architecture and dynamics of mammalian visual cortex. Emerging principles include a new understanding of invariances and part-whole compositions in terms of the hierarchical analysis of covariation in feature subspaces, reminiscent of the processing across layers and areas of visual cortex, and the analysis of view manifolds, which relate to the topologically ordered feature maps in cortex.
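One of the unsupervised methods the survey points to, ICA applied to natural-image patches, is easy to reproduce in outline: the independent components learned from patches come out as localized, oriented (Gabor-like) filters. The sketch below is a generic demonstration with assumed patch and component counts, not a model taken from any surveyed paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_ica_filters(images, patch_size=12, n_patches=20000, n_components=64, seed=0):
    """Learn ICA filters from random patches of grayscale natural images.

    Patch size, patch count and component count are assumed illustration values.
    """
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(0, img.shape[0] - patch_size)
        c = rng.integers(0, img.shape[1] - patch_size)
        patches.append(img[r:r + patch_size, c:c + patch_size].ravel())
    X = np.asarray(patches, dtype=np.float64)
    X -= X.mean(axis=1, keepdims=True)              # remove each patch's DC component
    ica = FastICA(n_components=n_components, max_iter=500, random_state=seed)
    ica.fit(X)
    return ica.components_.reshape(n_components, patch_size, patch_size)
```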
Citations: 2
A real-time wide field of view passive millimeter-wave imaging camera
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284280
S. Clark, C. Martin, Peter J. Costianes, V. Kolinko, J. Lovberg
With the current upsurge in domestic terrorism, suicide bombings and the like, there is increased interest in high-technology sensors that can provide true stand-off detection of concealed articles such as guns and, in particular, explosives in both controlled and uncontrolled areas. The camera discussed in this paper is based upon passive millimeter-wave imaging (75.5-93.5 GHz) and is intrinsically safe, as it uses only the natural thermal (blackbody) emissions from living beings and inanimate objects to form images. The camera consists of four subsystems which are interfaced to complete the final camera: Trex's patented flat-panel frequency-scanned phased array antenna, a front-end receiver, and phase and frequency processors that convert the antenna output (in phase and frequency space) into image space and in doing so form a readily recognizable image. The phase and frequency processors are based upon variants of a Rotman lens.
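The physics behind forming images from natural thermal emissions is compact enough to show directly: in the W-band, Planck radiance is well within the Rayleigh-Jeans regime, so scene contrast is proportional to brightness-temperature differences. The sketch below evaluates that radiance; the 85 GHz frequency and the 300 K versus 77 K example contrast are assumed illustration values, not figures from the paper.

```python
def rayleigh_jeans_radiance(freq_hz, temp_k):
    """Spectral radiance B(f, T) = 2 k T f^2 / c^2 in W m^-2 Hz^-1 sr^-1.

    Valid in the Rayleigh-Jeans limit (h f << k T), which holds to better than
    a few percent across the 75.5-93.5 GHz band at terrestrial temperatures.
    """
    k_b = 1.380649e-23      # Boltzmann constant, J/K
    c = 2.99792458e8        # speed of light, m/s
    return 2.0 * k_b * temp_k * freq_hz ** 2 / c ** 2

# Example (assumed values): radiance contrast between a 300 K surface and a
# reflected 77 K cold-sky term at 85 GHz, the kind of difference a passive
# millimeter-wave imager converts into image gray levels.
delta = rayleigh_jeans_radiance(85e9, 300.0) - rayleigh_jeans_radiance(85e9, 77.0)
```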
Citations: 14
Stereo mosaics with slanting parallel projections from many cameras or a moving camera
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284282
Zhigang Zhu
This paper presents an approach for fusing images from many video cameras, or from a moving video camera with external orientation data (e.g. GPS and INS data), into a few mosaicked images that preserve 3D information. In both cases, a virtual 2D array of cameras with FOV overlaps is formed to generate the whole coverage of a scene (or an object). We propose a representation that re-organizes the original perspective images into a set of parallel projections with different slanting viewing angles. In addition to providing a wide field of view, such a representation has two further benefits. First, mosaics with different slanting views represent occlusions encountered in a usual nadir view. Second, a stereo pair can be formed from a pair of slanting parallel mosaics, so image-based 3D viewing can be achieved. This representation can be used both as an advanced video interface for surveillance and as a pre-processing step for 3D reconstruction.
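The re-organization into slanting parallel projections can be pictured with a pushbroom-style strip mosaic: as the camera translates, a fixed image column sweeps the scene under a single parallel viewing direction, and two columns at different offsets give a stereo pair. The sketch below is a simplified illustration that assumes constant camera speed and ignores the GPS/INS-based alignment the paper relies on.

```python
import numpy as np

def strip_mosaic(frames, strip_col, strip_width=4):
    """Build a pushbroom-style mosaic by concatenating a fixed vertical strip
    from each frame of a camera translating parallel to the image plane.

    Strips taken left or right of the image center correspond to different
    slanting parallel viewing directions, so two mosaics built with different
    `strip_col` values form a stereo pair. Assumes constant camera speed so
    the strips tile the scene without gaps or overlap (an idealization).
    """
    strips = [f[:, strip_col:strip_col + strip_width] for f in frames]
    return np.concatenate(strips, axis=1)

# Example usage (column offsets are assumed values):
# left_mosaic  = strip_mosaic(frames, strip_col=200)
# right_mosaic = strip_mosaic(frames, strip_col=440)
```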
Citations: 6
Visual literacy: an overview
Pub Date : 2003-10-15 DOI: 10.1109/AIPR.2003.1284270
J. Aanstoos
Visual literacy may be defined as the ability to recognize and understand ideas conveyed through visible actions or images, as well as the ability to convey ideas or messages through imagery. Based on the idea that visual images are a language, some authors consider visual literacy to be more of a metaphor, relating imagery interpretation to conventional literacy, than a well-defined and teachable skill. However, the field is credited with the development of educational programs that enhance students' abilities to interpret and create visual messages, as well as with the improvement of reading and writing skills through the use of visual imagery. This paper presents a broad overview of the concept of visual literacy, focusing on its interdisciplinary nature and varied points of view.
Citations: 19