
Latest publications from the 34th Applied Imagery and Pattern Recognition Workshop (AIPR'05)

Adaptive confidence level assignment to segmented human face regions for improved face recognition
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.13
Satyanadh Gundimada, V. Asari
Improving existing face recognition technology and making it useful for many areas of application, including homeland security, is a major challenge. Face images are prone to variations caused by expressions, partial occlusions and lighting. These facial variations are responsible for the low accuracy rates of existing face recognition techniques, especially those based on linear subspace methods. A methodology to improve the accuracy of face recognition techniques in the presence of facial variations is presented in this paper. An optical-flow method based on the Lucas-Kanade technique is used to obtain the flow field between the neutral face template and the test image and thereby identify the variations. Face recognition is performed on modularized face images rather than on the whole image. A confidence level is associated with each module of the test image based on the measured amount of variation in that module. It is observed that the amount of variation within a module is proportional to the sum of the magnitudes of the optical-flow vectors within that module. The least confidence is assigned to the modules with the largest sum of optical-flow vector magnitudes. A K-nearest neighbor distance measure is used to classify each module of the test image individually after projecting it into the corresponding subspace. The confidence associated with each module is taken into account when computing the total score of each training class for the classification of the test image. The algorithm is analyzed with respect to two linear subspaces, PCA and LDA. A substantial increase in accuracy is recorded when the proposed algorithm is applied to available face databases, compared with other conventional methods.
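The per-module confidence weighting and modular K-nearest neighbor scoring described above can be illustrated with a minimal sketch. The dense Farneback flow (standing in for the Lucas-Kanade flow field), the 4x4 module grid, the inverse-magnitude confidence rule, and the distance-weighted voting are illustrative assumptions rather than the authors' exact formulation; the subspace projection (PCA or LDA) is assumed to have already been applied to the module feature vectors.

```python
import numpy as np
import cv2

def module_confidences(neutral, test, grid=(4, 4)):
    """Assign each module a confidence inversely related to its optical-flow magnitude."""
    # Dense Farneback flow stands in here for the Lucas-Kanade flow field of the paper.
    flow = cv2.calcOpticalFlowFarneback(neutral, test, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    gh, gw = h // grid[0], w // grid[1]
    sums = np.array([[mag[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].sum()
                      for j in range(grid[1])] for i in range(grid[0])])
    conf = 1.0 / (1.0 + sums)          # modules with large flow get low confidence
    return (conf / conf.sum()).ravel()

def classify(test_modules, train_modules, train_labels, confidences, k=3):
    """Score every training class by confidence-weighted k-NN votes over the modules."""
    scores = {}
    for m, conf in enumerate(confidences):
        # Distances from the test module's (already subspace-projected) feature vector
        # to the corresponding module of every training image.
        d = np.linalg.norm(train_modules[:, m, :] - test_modules[m], axis=1)
        for idx in np.argsort(d)[:k]:
            label = train_labels[idx]
            scores[label] = scores.get(label, 0.0) + conf / (1.0 + d[idx])
    return max(scores, key=scores.get)
```

In this sketch, `train_modules` is assumed to have shape (num_training_images, num_modules, feature_dim) and to hold the projected module features of the training set.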
Citations: 1
Content based object retrieval with image primitive database
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.24
J. Kinser, Guisong Wang
Content-based image retrieval is the task of recalling images from a large database that are similar to a probe image. Many schemes have been proposed, and they often follow the pattern of extracting information from images and classifying this information as a single entity. We propose that image segments are far more complicated and that two adjustments are necessary. The first is that pixels do not necessarily belong to a single object, and the second is that image segments cannot be classified as a single entity. We propose a new approach that adopts these tenets and present results indicating the feasibility of creating syntactical definitions of image objects.
Citations: 0
Face recognition using multispectral random field texture models, color content, and biometric features
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.28
O. Hernandez, Mitchell S. Kleiman
Most of the available research on face recognition has been performed using gray-scale imagery. This paper presents a novel two-pass face recognition system that uses a multispectral random field texture model, specifically the multispectral simultaneous autoregressive (MSAR) model, and illumination-invariant color features. During the first pass, the system detects and segments a face from the background of a color image and confirms the detection based on a statistically modeled skin pixel map and the elliptical nature of human faces. In the second pass, the face regions are located using the same image segmentation approach on a subspace of the original image, biometric information, and spatial relationships. The detected facial features are then assigned biometric values based on anthropometric measurements, and a set of vectors is created to determine similarity in the facial feature space.
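A minimal sketch of the first-pass confirmation step (skin-pixel map plus ellipse check) is given below. The Gaussian chrominance model in YCrCb, its hypothetical mean and covariance, and the aspect-ratio test are assumptions made for illustration; the paper's MSAR texture model and the second-pass feature measurement are not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical mean and inverse covariance of skin chrominance (Cr, Cb), e.g. fitted offline.
SKIN_MEAN = np.array([150.0, 110.0])
SKIN_COV_INV = np.linalg.inv(np.array([[60.0, 10.0], [10.0, 40.0]]))

def skin_map(bgr, thresh=4.0):
    """Return a binary map of pixels whose chrominance is close to the skin model."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    d = ycrcb[..., 1:3] - SKIN_MEAN
    mahal = np.einsum('...i,ij,...j->...', d, SKIN_COV_INV, d)  # squared Mahalanobis distance
    return (mahal < thresh).astype(np.uint8)

def looks_like_face(mask, min_area=500, max_aspect=2.0):
    """Confirm a detection if the largest skin blob is roughly elliptical."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    c = max(contours, key=cv2.contourArea)
    if cv2.contourArea(c) < min_area or len(c) < 5:
        return False
    (_, _), (w, h), _ = cv2.fitEllipse(c)          # fitted ellipse axes
    return max(w, h) / max(min(w, h), 1e-6) < max_aspect
```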
Citations: 6
A rate distortion method for waveform design in RF image formation
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.11
R. Bonneau
Conventional RF image formation relies on a fixed waveform set that is designed largely to obtain maximum resolution for a given amount of bandwidth present in a waveform. However, the correlation process for a given waveform set varies widely depending on the cross-correlation properties of the waveform and the geometry of the aperture interrogating the object to be imaged. We propose a method that maximizes the quality of the reconstructed imagery by first using an orthogonal basis to minimize the unwanted correlation response of the waveform. We then shape the frequency and temporal correlation response of the waveform for a given target using a rate-distortion criterion and demonstrate the performance of the method.
Citations: 10
Nonlinear acoustic concealed weapons detection
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.37
A. Achanta, M. McKenna, J. Heyman, K. Rudd, M. Hinders, Peter J. Costianes
The detection of concealed weapons at a distance is a critical security issue that has been a great challenge for different imaging approaches. In this paper, we discuss the use of ultrasonics in a novel way to probe for metallic and nonmetallic materials under clothing. Conventional ultrasonics has problems penetrating clothing and produces false positives from specular reflections. Our approach is to use ultrasonics to create a localized zone where nonlinear interactions generate a lower-frequency acoustic wave that is able to penetrate clothing better than direct ultrasonics. The generation of a probing beam for concealed weapons is described in this brief summary, with comparisons of the physical models against experimental data. An imaging scan of concealed improvised weapons seized by officials at correctional institutions is presented to highlight the value of this approach.
Citations: 17
Automatic registration of multisensor airborne imagery
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.21
Xiaofeng Fan, H. Rhody, E. Saber
In this paper, we propose a novel technique based on maximization of mutual information (MMI) and multiresolution analysis that is capable of automatically registering multisensor images captured by multiple airborne cameras. In contrast to conventional methods that extract and employ feature points, MMI-based algorithms use the mutual information between two given images to compute the registration parameters. These, in turn, are then used to perform inter- and intra-sensor registration. Wavelet-based techniques are also used within a multiresolution analysis framework, yielding a significant increase in computational efficiency for images captured at different resolutions. Our results indicate that the proposed algorithms are very effective in registering infrared images taken at three different wavelengths with a high-resolution visual image of a given scene. The techniques form the foundation of a real-time image processing pipeline for automatic geo-rectification, target detection and mapping.
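The mutual-information measure that drives MMI registration can be sketched directly from a joint gray-level histogram, as below. The 64-bin histogram and the brute-force integer-translation search are illustrative assumptions; the paper's wavelet-based multiresolution framework and full transformation model are not reproduced.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information I(A;B) estimated from the joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0) on empty histogram cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_shift(ref, moving, max_shift=10):
    """Pick the integer translation of `moving` that maximizes MI with `ref`."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(ref, shifted)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best, best_mi
```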
Citations: 16
Civilian target detection using hierarchical fusion
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.22
Balasubramanian Lakshminarayanan, H. Qi
Automatic target recognition (ATR) is the process of aided or unaided target detection and recognition using data from different sensors. Fusion techniques are used to improve ATR because they reduce system dependence on a single sensor and increase noise tolerance. In this work, ATR is performed on civilian targets, which are considered more difficult to classify than military targets. The dataset is provided by the Night Vision & Electronic Sensors Directorate (NVESD) and was collected using the sensor fusion testbed (SFTB) developed by Northrop Grumman Mission Systems. Stationary color and infrared cameras capture images of seven different vehicles at different orientations and distances. Targets include two sedans, two SUVs, two light trucks and a heavy truck. Fusion is performed at the event level and the sensor level using temporal fusion and behavior-knowledge-space (BKS) fusion, respectively. It is shown that fusion provides better and more robust classification than classification of individual frames without fusion. The classification experiment shows mean classification rates of 65.0%, 70.1% and 77.7% for individual-frame classification, temporal fusion and BKS fusion, respectively. It is demonstrated that classification accuracy increases as the level of fusion goes higher. By combining targets into cars, SUVs and light trucks, thereby reducing the number of classes to three, higher mean classification rates of 75.4%, 90.0% and 94.8% were obtained.
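The two fusion stages can be sketched as follows: temporal fusion as a majority vote over per-frame decisions within an event, and BKS fusion as a lookup table keyed by the tuple of sensor-level decisions and filled with the class most often correct for that combination during training. The tie-breaking and fallback rules for unseen decision combinations are illustrative assumptions.

```python
from collections import Counter, defaultdict

def temporal_fusion(frame_decisions):
    """Fuse per-frame class decisions for one event by majority vote."""
    return Counter(frame_decisions).most_common(1)[0][0]

class BKSFusion:
    """Behavior-knowledge space: map each combination of classifier outputs to the
    class most often correct for that combination in the training data."""
    def __init__(self):
        self.table = defaultdict(Counter)

    def fit(self, decision_tuples, true_labels):
        for decisions, label in zip(decision_tuples, true_labels):
            self.table[tuple(decisions)][label] += 1

    def predict(self, decisions):
        cell = self.table.get(tuple(decisions))
        if cell:                                          # seen combination: use its majority class
            return cell.most_common(1)[0][0]
        return Counter(decisions).most_common(1)[0][0]    # unseen combination: fall back to majority vote
```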
Citations: 2
Dual-modality imager based on ultrasonic modulation of incoherent light in turbid medium
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.27
K. Krishnan, P. Fomitchov, Stephen J. Lomnes, M. Kollegal, F. Jansen
A hybrid imaging system capable of combining the spatial resolution of ultrasound with the functional sensitivity of fluorescence optical imaging can render the conventionally ill-posed optical image reconstruction more tractable. In this paper, ultrasonic modulation of diffuse photons from an incoherent source in a turbid medium, or FluoroSound, is proposed as a mechanism for dual-modality fusion. Theoretical calculations based on a diffusion approximation, for tissue depths up to 2 cm and reduced scattering coefficients ranging from 5 to 20 cm^-1, show that diffuse photon modulation increases with scattering, decreases with absorption, reaches a minimum when the acoustic focus is located midway between the source and the detector, and can be optimized by a suitably shaped and sized acoustic focus. The diffuse photon modulation signature could potentially be used to improve the spatial resolution of deep-tissue fluorescence imaging and to enable fusion of ultrasound and optical imaging in a single measurement.
Citations: 0
Constrained optimal interferometric imaging of extended objects
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.23
R. Rao, B. Himed
Correlative interferometry has been proposed for terahertz imaging in applications such as standoff detection of concealed explosives. It offers the advantage of low-cost image reconstruction in the pupil plane, given the current unavailability of ready-to-use focal-plane imaging technology at these wavelengths. Current interferometric approaches are based on finding the inverse Fourier transform of the spatial correlation computed from field measurements at a distance. This paper proposes an image reconstruction approach that provides a constrained least-squares fit between the autocorrelation computed from sensor measurements and the far-field autocorrelation expression for extended objects and line arrays.
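The constrained least-squares fit can be sketched as a nonnegative least-squares problem: recover a nonnegative source intensity profile whose modeled far-field correlation best matches the measured correlations. The discrete cosine forward kernel for a line array and the use of scipy.optimize.nnls are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct_profile(baselines, measured_corr, source_positions, wavelength):
    """Solve min ||A x - c||_2 subject to x >= 0 for the source intensities x."""
    # Forward model (assumed): correlation at baseline b is
    # sum_k x_k * cos(2 * pi * b * theta_k / wavelength) for a 1-D line array.
    A = np.cos(2.0 * np.pi * np.outer(baselines, source_positions) / wavelength)
    x, residual = nnls(A, measured_corr)   # constrained least-squares fit
    return x, residual
```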
Citations: 4
Automatic inspection system using machine vision
Pub Date : 2005-10-19 DOI: 10.1109/AIPR.2005.20
U. S. Khan, J. Iqbal, Mahmood A. Khan
From the beginning of time, man has tried to automate things for comfort, accuracy, precision and speed. Technology advanced from manual to mechanical, and then from mechanical to automatic. Vision-based applications are the products of the future. Machine vision systems integrate electronic components with software systems to imitate a variety of human functions. This paper describes current research on a vision-based inspection system. A computer using a camera as an eye has replaced the manual inspection system. The camera is mounted over the conveyor belt. The main objective is to inspect for defects; instead of using complicated filters such as edge enhancement or correlation, a very simple technique has been implemented. Since the objects are moving along the conveyor belt, time is a factor that must be accounted for. Using filters or correlation procedures gives better results but consumes a lot of time. The technique discussed in this paper inspects at the basic pixel level. It checks on the basis of size, shape, color and dimensions. We have implemented it in five applications, and the results achieved were good enough to show that the algorithm works as desired.
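A pixel-level pass/fail check of the kind described above might look like the following sketch: segment the part from the belt with a color threshold, then compare its area, bounding-box dimensions and mean color against reference tolerances. The HSV threshold values, reference measurements and tolerances are illustrative assumptions.

```python
import numpy as np
import cv2

def inspect(frame, lower_hsv=(20, 80, 80), upper_hsv=(35, 255, 255),
            ref_area=12000, ref_dims=(160, 90), ref_color=(60, 140, 180),
            area_tol=0.15, dim_tol=0.10, color_tol=40.0):
    """Return (passes, reasons) for one conveyor-belt frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    reasons = []
    area = int(cv2.countNonZero(mask))                 # size check
    if abs(area - ref_area) > area_tol * ref_area:
        reasons.append(f"size out of tolerance: {area}")
    ys, xs = np.where(mask > 0)
    if xs.size:
        w, h = xs.max() - xs.min() + 1, ys.max() - ys.min() + 1   # dimension check
        if abs(w - ref_dims[0]) > dim_tol * ref_dims[0] or abs(h - ref_dims[1]) > dim_tol * ref_dims[1]:
            reasons.append(f"dimensions out of tolerance: {w}x{h}")
        mean_color = cv2.mean(frame, mask=mask)[:3]               # color check
        if np.linalg.norm(np.array(mean_color) - np.array(ref_color)) > color_tol:
            reasons.append("color out of tolerance")
    else:
        reasons.append("no object found")
    return (not reasons), reasons
```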
Citations: 31