
Latest publications in Real-Time Imaging

Event detection for intelligent car park video surveillance
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2005.02.002
Georgios Diamantopoulos, Michael Spann

Intelligent surveillance has become an important research issue due to the high cost and low efficiency of human supervisors, and machine intelligence is required to provide a solution for automated event detection. In this paper we describe a real-time system that has been used for detecting tailgating, an example of complex interactions and activities within a vehicle parking scenario, using an adaptive background learning algorithm and intelligence to overcome the problems of object masking, separation and occlusion. We also show how a generalized framework may be developed for the detection of other complex events.
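The abstract does not describe the adaptive background learning algorithm itself; a minimal running-average sketch (the `alpha` blend rate and `threshold` are illustrative values, not the paper's) shows the general idea of background maintenance plus foreground detection that such event-detection systems build on:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Blend the current frame into the background model (running average)."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25.0):
    """Flag pixels that differ from the background by more than the threshold."""
    return np.abs(frame - bg) > threshold

# Static scene with one bright moving "object" block.
bg = np.zeros((8, 8))
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 200.0           # object pixels
mask = foreground_mask(bg, frame)
print(mask.sum())                 # 4 object pixels flagged
bg = update_background(bg, frame) # background slowly absorbs the scene
```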

Real-Time Imaging, Volume 11, Issue 3, Pages 233-243.
Citations: 15
Learning the Semantic Landscape: embedding scene knowledge in object tracking
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2004.12.002
D. Greenhill, J. Renno, J. Orwell, G.A. Jones

The accuracy of object tracking methodologies can be significantly improved by utilizing knowledge about the monitored scene. Such scene knowledge includes the homography between the camera and ground planes and the occlusion landscape identifying the depth map associated with the static occlusions in the scene. Using the ground plane, a simple method of relating the projected height and width of people objects to image location is used to constrain the dimensions of appearance models. Moreover, trajectory modeling can be greatly improved by performing tracking on the ground-plane tracking using global real-world noise models for the observation and dynamic processes. Finally, the occlusion landscape allows the tracker to predict the complete or partial occlusion of object observations. To facilitate plug and play functionality, this scene knowledge must be automatically learnt. The paper demonstrates how, over a sufficient length of time, observations from the monitored scene itself can be used to parameterize the semantic landscape.
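Ground-plane reasoning of this kind rests on a planar homography between the image and the ground. A small sketch of mapping an image point to ground-plane coordinates (the matrix `H` below is a made-up example, not a calibrated camera-to-ground homography):

```python
import numpy as np

def to_ground_plane(H, pt):
    """Map an image point (x, y) to ground-plane coordinates via homography H."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])  # homogeneous coordinates
    return v[:2] / v[2]            # dehomogenize

# Illustrative homography: uniform scaling plus a translation.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
print(to_ground_plane(H, (100, 60)))   # [60. 50.]
```

In the paper's setting the learnt homography lets projected person height and width be predicted from image location, constraining the appearance model.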

Real-Time Imaging, Volume 11, Issue 3, Pages 186-203.
Citations: 23
Detection of cyclic human activities based on the morphological analysis of the inter-frame similarity matrix
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2005.03.004
Alexandra Branzan Albu, Mehran Yazdi, Robert Bergevin

This paper describes a new method for the temporal segmentation of periodic human activities from continuous real-world indoor video sequences acquired with a static camera. The proposed approach is based on the concept of inter-frame similarity matrix. Indeed, this matrix contains relevant information for the analysis of cyclic and symmetric human activities, where the motion performed during the first semi-cycle is repeated in the opposite direction during the second semi-cycle. Thus, the pattern associated with a periodic activity in the similarity matrix is rectangular and decomposable into elementary units. We propose a morphology-based approach for the detection and analysis of activity patterns. Pattern extraction is further used for the detection of the temporal boundaries of the cyclic symmetric activities. The approach for experimental evaluation is based on a statistical estimation of the ground truth segmentation and on a confidence ratio for temporal segmentations.
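The inter-frame similarity matrix can be illustrated concretely. In this toy version, negative mean absolute difference stands in for the similarity measure (the abstract does not specify the paper's exact measure); a periodic sequence then yields maximal similarity at lags equal to the period, producing the repeating pattern the morphological analysis exploits:

```python
import numpy as np

def similarity_matrix(frames):
    """Pairwise inter-frame similarity (negative mean absolute difference)."""
    n = len(frames)
    s = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s[i, j] = -np.mean(np.abs(frames[i] - frames[j]))
    return s

# A toy sequence that repeats with period 2.
a = np.zeros((4, 4)); b = np.ones((4, 4))
frames = [a, b, a, b]
s = similarity_matrix(frames)
# Frames two steps apart are identical, so those entries are maximal (0).
print(s[0, 2], s[0, 1])
```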

Real-Time Imaging, Volume 11, Issue 3, Pages 219-232.
Citations: 4
Optical flow-based real-time object tracking using non-prior training active feature model
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2005.03.006
Jeongho Shin , Sangjin Kim , Sangkyu Kang , Seong-Won Lee , Joonki Paik , Besma Abidi , Mongi Abidi

This paper presents a feature-based object tracking algorithm using optical flow under the non-prior training (NPT) active feature model (AFM) framework. The proposed tracking procedure can be divided into three steps: (i) localization of an object-of-interest, (ii) prediction and correction of the object's position by utilizing spatio-temporal information, and (iii) restoration of occlusion using NPT-AFM. The proposed algorithm can track both rigid and deformable objects, and is robust against sudden object motion because a feature point and its corresponding motion direction are tracked at the same time. Tracking performance is not degraded even with a complicated background because feature points inside an object are completely separated from the background. Finally, the AFM enables stable tracking of occluded objects with up to 60% occlusion. NPT-AFM, one of the major contributions of this paper, removes the off-line preprocessing step for generating an a priori training set. The training set used for model fitting can be updated at each frame to make the object's features more robust under occlusion. The proposed AFM can track deformable, partially occluded objects using a greatly reduced number of feature points rather than the entire shapes used in existing shape-based methods. The on-line updating of the training set and the reduced number of feature points enable a real-time, robust tracking system. Experiments were performed using several in-house video clips from a static camera, including objects such as a robot moving on a floor and people walking both indoors and outdoors. To demonstrate the performance of the proposed tracking algorithm, experiments were also performed in noisy and low-contrast environments. For a more objective comparison, the PETS 2001 and PETS 2002 datasets were also used.
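Step (ii), prediction and correction of the object's position, can be sketched loosely. This toy predict/correct cycle with a constant-velocity model and a blending `gain` is an illustrative stand-in for the idea, not the paper's NPT-AFM procedure:

```python
def predict_correct(pos, vel, measurement, gain=0.5):
    """One predict/correct cycle: constant-velocity prediction,
    then blend in the measured feature position."""
    predicted = (pos[0] + vel[0], pos[1] + vel[1])
    corrected = (predicted[0] + gain * (measurement[0] - predicted[0]),
                 predicted[1] + gain * (measurement[1] - predicted[1]))
    new_vel = (corrected[0] - pos[0], corrected[1] - pos[1])
    return corrected, new_vel

pos, vel = (10.0, 10.0), (2.0, 0.0)
pos, vel = predict_correct(pos, vel, measurement=(13.0, 10.0))
print(pos)   # (12.5, 10.0)
```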

Real-Time Imaging, Volume 11, Issue 3, Pages 204-218.
Citations: 98
Real-time foreground–background segmentation using codebook model
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2004.12.004
Kyungnam Kim , Thanarat H. Chalidabhongse , David Harwood , Larry Davis

We present a real-time algorithm for foreground–background segmentation. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. The codebook representation is efficient in memory and speed compared with other background modeling techniques. Our method can handle scenes containing moving backgrounds or illumination variations, and it achieves robust detection for different types of videos. We compared our method with other multimode modeling techniques.

In addition to the basic algorithm, two features improving the algorithm are presented—layered modeling/detection and adaptive codebook updating.

For performance evaluation, we have applied perturbation detection rate analysis to four background subtraction algorithms and two videos of different types of scenes.
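A minimal, grayscale rendition of the codebook idea (the paper works with color and brightness bounds; the scalar values and the `tol` here are illustrative): background samples at a pixel are quantized into codewords, and a value matching no codeword is declared foreground. Multiple codewords per pixel let the model absorb periodic-like background motion:

```python
def train_codebook(samples, tol=10.0):
    """Quantize background samples at one pixel into codewords [lo, hi]."""
    codebook = []
    for v in samples:
        for cw in codebook:
            if cw[0] - tol <= v <= cw[1] + tol:
                cw[0], cw[1] = min(cw[0], v), max(cw[1], v)
                break
        else:                      # no codeword matched: start a new one
            codebook.append([v, v])
    return codebook

def is_foreground(codebook, v, tol=10.0):
    return not any(cw[0] - tol <= v <= cw[1] + tol for cw in codebook)

# Background flickers between two modes (e.g. swaying foliage).
cb = train_codebook([20, 22, 80, 21, 82, 79])
print(len(cb), is_foreground(cb, 50), is_foreground(cb, 81))
```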

Real-Time Imaging, Volume 11, Issue 3, Pages 172-185.
Citations: 1601
Rule-based real-time detection of context-independent events in video shots
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2004.12.001
Aishy Amer , Eric Dubois , Amar Mitiche

The purpose of this paper is to investigate a real-time system to detect context-independent events in video shots. We test the system in video surveillance environments with a fixed camera. We assume that objects have been segmented (not necessarily perfectly) and reason with their low-level features, such as shape, and mid-level features, such as trajectory, to infer events related to moving objects.

Our goal is to detect generic events, i.e., events that are independent of the context of where or how they occur. Events are detected based on a formal definition of these and on approximate but efficient world models. This is done by continually monitoring changes and behavior of features of video objects. When certain conditions are met, events are detected. We classify events into four types: primitive, action, interaction, and composite.

Our system includes three interacting video processing layers: enhancement to estimate and reduce additive noise, analysis to segment and track video objects, and interpretation to detect context-independent events. The contributions in this paper are the interpretation of spatio-temporal object features to detect context-independent events in real time, the adaptation to noise, and the correction and compensation of low-level processing errors at higher layers where more information is available.

The effectiveness and real-time response of our system are demonstrated by extensive experimentation on indoor and outdoor video shots in the presence of multi-object occlusion, different noise levels, and coding artifacts.
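A toy illustration of the rule-based idea, monitoring an object's trajectory and firing an event whenever a feature condition is met (the event names and the `still_eps` threshold here are invented for illustration, not the paper's rule set):

```python
def detect_events(track, still_eps=1.0):
    """Rule-based check of a trajectory: 'appear' (primitive event) at the
    first observation, 'stop' (action event) when per-frame displacement
    falls below still_eps."""
    events = []
    for t, (x, y) in enumerate(track):
        if t == 0:
            events.append((t, "appear"))
        else:
            px, py = track[t - 1]
            if abs(x - px) + abs(y - py) < still_eps:
                events.append((t, "stop"))
    return events

# An object that moves, then comes to rest.
track = [(0, 0), (5, 0), (10, 0), (10.2, 0)]
print(detect_events(track))   # [(0, 'appear'), (3, 'stop')]
```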

Real-Time Imaging, Volume 11, Issue 3, Pages 244-256.
Citations: 13
Introduction to the special issue on video object processing for surveillance applications
Pub Date : 2005-06-01 DOI: 10.1016/j.rti.2005.06.001
Aishy Amer , Carlo Regazzoni
Real-Time Imaging, Volume 11, Issue 3, Pages 167-171.
Citations: 46
SmartSpectra: Applying multispectral imaging to industrial environments
Pub Date : 2005-04-01 DOI: 10.1016/j.rti.2005.04.007
Joan Vila , Javier Calpe , Filiberto Pla , Luis Gómez , Joseph Connell , John Marchant , Javier Calleja , Michael Mulqueen , Jordi Muñoz , Arnoud Klaren , The SmartSpectra Team

SmartSpectra is a smart multispectral system for industrial, environmental, and commercial applications where the use of spectral information beyond the visible range is needed. The SmartSpectra system provides six spectral bands in the range 400–1000 nm. The bands are configurable in terms of central wavelength and bandwidth by using electronic tunable filters. SmartSpectra consists of a multispectral sensor and the software that controls the system and simplifies the acquisition process. A first prototype called Autonomous Tunable Filter System is already available. This paper describes the SmartSpectra system, demonstrates its performance in the estimation of chlorophyll in plant leaves, and discusses its implications in real-time applications.

Real-Time Imaging, Volume 11, Issue 2, Pages 85-98.
Citations: 29
Digital zooming for color filter array-based image sensors
Pub Date : 2005-04-01 DOI: 10.1016/j.rti.2005.01.002
Rastislav Lukac , Konstantinos N. Plataniotis

In this paper, zooming methods which operate directly on color filter array (CFA) data are proposed, analyzed, and evaluated. Under the proposed framework enlarged spatial resolution images are generated directly from the CFA-based image sensors. The reduced computational complexity of the proposed schemes makes them ideal for real-time surveillance systems, industrial strength computer vision solutions, and mobile sensor-based visual systems. Simulation studies reported here indicate that the new methods (i) produce excellent results, in terms of both objective and subjective evaluation metrics, and (ii) outperform conventional zooming schemes operating in the RGB domain.
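One simple way to zoom directly on CFA data, sketched here under the assumption of a Bayer mosaic (pixel replication is a deliberately crude stand-in; the paper's interpolation is more sophisticated): enlarge each of the four color sub-lattices separately so that the output remains a valid mosaic of the same pattern:

```python
import numpy as np

def zoom_cfa_2x(cfa):
    """Double a Bayer CFA image by zooming each of the four colour
    sub-lattices separately (pixel replication), preserving the mosaic."""
    h, w = cfa.shape
    out = np.zeros((2 * h, 2 * w), dtype=cfa.dtype)
    for dy in range(2):                # the four Bayer phases
        for dx in range(2):
            sub = cfa[dy::2, dx::2]    # one colour plane
            big = np.repeat(np.repeat(sub, 2, axis=0), 2, axis=1)
            out[dy::2, dx::2] = big    # write back on the same lattice
    return out

cfa = np.arange(16).reshape(4, 4)
big = zoom_cfa_2x(cfa)
print(big.shape)   # (8, 8)
```

Because every output sample stays on its own color lattice, no cross-channel mixing occurs, which is what makes CFA-domain zooming cheap enough for the real-time targets mentioned in the abstract.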

Real-Time Imaging, Volume 11, Issue 2, Pages 129-138.
Citations: 20
Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps
Pub Date : 2005-04-01 DOI: 10.1016/j.rti.2005.03.003
D. Moshou , C. Bravo , R. Oberti , J. West , L. Bodria , A. McCartney , H. Ramon

The objective of this research was to develop a ground-based real-time remote sensing system for detecting diseases in arable crops under field conditions and in an early stage of disease development, before they can visibly be detected. This was achieved through sensor fusion of hyper-spectral reflection information between 450 and 900 nm and fluorescence imaging. The work reported here used yellow rust (Puccinia striiformis) disease of winter wheat as a model system for testing the featured technologies. Hyper-spectral reflection images of healthy and infected plants were taken with an imaging spectrograph under field circumstances and ambient lighting conditions. Multi-spectral fluorescence images were taken simultaneously on the same plants using UV-blue excitation. Through comparison of the 550 and 690 nm fluorescence images, it was possible to detect disease presence. The fraction of pixels in one image recognized as diseased was set as the final fluorescence disease variable, called the lesion index (LI). A spectral reflection method based on only three wavebands was developed that could discriminate disease from healthy with an overall error of about 11.3%. The method based on fluorescence was less accurate, with an overall discrimination error of about 16.5%. However, fusing the measurements from the two approaches together allowed an overall disease-from-healthy discrimination accuracy of 94.5% using quadratic discriminant analysis (QDA). Data fusion was also performed using a Self-Organizing Map (SOM) neural network, which decreased the overall classification error to 1%. The possible implementation of the SOM-based disease classifier for rapid retraining in the field is discussed. Further, the real-time aspects of the acquisition and processing of spectral and fluorescence images are discussed. With the proposed adaptations, the multi-sensor fusion disease detection system can be applied to the real-time detection of plant disease in the field.
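A minimal 1-D Kohonen map sketch of the SOM training rule (the learning rate, neighbourhood function, and toy two-cluster data below are illustrative, not the paper's fused spectral/fluorescence feature vectors):

```python
import numpy as np

def train_som(data, n_units=4, epochs=50, lr=0.3, seed=0):
    """1-D Kohonen map: for each sample, move the best-matching unit
    (and, more weakly, its neighbours on the map) toward the sample."""
    rng = np.random.default_rng(seed)
    w = rng.random((n_units, data.shape[1]))
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            for i in range(n_units):
                h = np.exp(-abs(i - bmu))   # neighbourhood weight on the map
                w[i] += lr * h * (x - w[i])
    return w

# Two well-separated clusters (stand-ins for diseased/healthy features).
data = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
w = train_som(data)
bmu = lambda x: int(np.argmin(np.linalg.norm(w - x, axis=1)))
print(bmu([0.05, 0.0]), bmu([0.95, 1.0]))  # the two clusters should land on different units
```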

{"title":"Plant disease detection based on data fusion of hyper-spectral and multi-spectral fluorescence imaging using Kohonen maps","authors":"D. Moshou ,&nbsp;C. Bravo ,&nbsp;R. Oberti ,&nbsp;J. West ,&nbsp;L. Bodria ,&nbsp;A. McCartney ,&nbsp;H. Ramon","doi":"10.1016/j.rti.2005.03.003","DOIUrl":"10.1016/j.rti.2005.03.003","url":null,"abstract":"<div><p>The objective of this research was to develop a ground-based real-time remote sensing system for detecting diseases in arable crops under field conditions and in an early stage of disease development, before it can visibly be detected. This was achieved through sensor fusion of hyper-spectral reflection information between 450 and 900<!--> <!-->nm and fluorescence imaging. The work reported here used yellow rust (<em>Puccinia striiformis</em><span>) disease of winter wheat as a model system for testing the featured technologies. Hyper-spectral reflection images of healthy and infected plants were taken with an imaging spectrograph under field circumstances and ambient lighting conditions. Multi-spectral fluorescence images were taken simultaneously on the same plants using UV-blue excitation. Through comparison of the 550 and 690</span> <!-->nm fluorescence images, it was possible to detect disease presence. The fraction of pixels in one image, recognized as diseased, was set as the final fluorescence disease variable called the lesion index (<span><math><mrow><mi>LI</mi></mrow></math></span><span>). A spectral reflection method, based on only three wavebands, was developed that could discriminate disease from healthy with an overall error of about 11.3%. The method based on fluorescence was less accurate with an overall discrimination error of about 16.5%. However, fusing the measurements from the two approaches together allowed overall disease from healthy discrimination of 94.5% by using QDA. Data fusion was also performed using a Self-Organizing Map (SOM) neural network which decreased the overall classification error<span> to 1%. The possible implementation of the SOM-based disease classifier for rapid retraining in the field is discussed. Further, the real-time aspects of the acquisition and processing of spectral and fluorescence images are discussed. With the proposed adaptations the multi-sensor fusion disease detection system can be applied in the real-time detection of plant disease in the field.</span></span></p></div>","PeriodicalId":101062,"journal":{"name":"Real-Time Imaging","volume":"11 2","pages":"Pages 75-83"},"PeriodicalIF":0.0,"publicationDate":"2005-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.rti.2005.03.003","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83763261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
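The abstract's Kohonen-map fusion step can be sketched as a minimal SOM trained on fused feature vectors (three reflectance bands plus the lesion index), with each map unit labeled by its nearest training sample. The grid size, learning schedule, and toy data below are assumptions for illustration, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(X, grid=(4, 4), iters=400, lr0=0.5, sigma0=1.5):
    """Fit a small Kohonen map to fused feature vectors X: (n_samples, n_features)."""
    h, w = grid
    units = rng.random((h * w, X.shape[1]))  # prototype vectors, one per map unit
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((units - x) ** 2).sum(axis=1))  # best-matching unit
        decay = np.exp(-t / iters)                       # shared decay schedule
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance to the BMU
        nh = np.exp(-d2 / (2 * (sigma0 * decay) ** 2))   # neighbourhood kernel
        units += lr0 * decay * nh[:, None] * (x - units)
    return units

def classify(units, unit_labels, x):
    """Predict via the label of the nearest map prototype."""
    return unit_labels[np.argmin(((units - x) ** 2).sum(axis=1))]

# Toy fused features [band1, band2, band3, LI]: healthy near 0.2, diseased near 0.8.
healthy = rng.normal(0.2, 0.05, size=(50, 4))
diseased = rng.normal(0.8, 0.05, size=(50, 4))
X = np.vstack([healthy, diseased])
y = np.array([0] * 50 + [1] * 50)  # 0 = healthy, 1 = diseased

units = train_som(X)
# Label each unit with the class of its nearest training sample.
unit_labels = np.array([y[np.argmin(((X - u) ** 2).sum(axis=1))] for u in units])
print(classify(units, unit_labels, np.full(4, 0.75)))
```

Because the SOM is cheap to retrain on a handful of labeled vectors, a scheme like this is consistent with the rapid in-field retraining the authors discuss, though their exact procedure is not given in the abstract.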
Citations: 183
Journal: Real-Time Imaging