
Latest publications from 2009 Digital Image Computing: Techniques and Applications

Evaluation of a Particle Filter to Track People for Visual Surveillance
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.24
J. Sherrah, B. Ristic, N. Redding
Previously, a particle filter was proposed to detect colour objects in video [1]. In this work, the particle filter is adapted to track people in surveillance video. Detection is based on automated background modelling rather than a manually generated object colour model. A labelling method is proposed that tracks objects through the scene rather than detecting them. A methodical comparison between the new method and two other multi-object trackers is presented on the PETS 2004 benchmark data set. The particle filter gives significantly fewer false alarms due to explicit modelling of the object birth and death processes, while maintaining a good detection rate.
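The predict–weight–resample loop at the heart of such a tracker can be sketched in a few lines. This is a generic 1D bootstrap particle filter with Gaussian motion and observation noise and illustrative parameters, not the paper's detector with its birth/death processes:

```python
import math
import random

def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=2.0):
    """Track a 1D state through noisy observations (bootstrap particle filter)."""
    particles = [random.gauss(observations[0], obs_std) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse each particle with the (random-walk) motion model.
        particles = [p + random.gauss(0.0, motion_std) for p in particles]
        # Update: weight each particle by the observation likelihood.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        if total == 0.0:  # lost track entirely: fall back to uniform weights
            weights = [1.0] * n_particles
            total = float(n_particles)
        weights = [w / total for w in weights]
        # Resample (multinomial) to concentrate particles on likely states.
        particles = random.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

A full tracker would replace the 1D state with position/size, and add the birth and death processes the abstract credits for the low false-alarm rate.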
Citations: 8
A Quadratic Deformation Model for Facial Expression Recognition
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.51
M. Obaid, R. Mukundan, Hartmut Goecke, M. Billinghurst, H. Seichter
In this paper, we propose a novel approach for recognizing facial expressions based on an Active Appearance Model facial feature tracking system combined with quadratic deformation model representations of facial expressions. Thirty-seven facial feature points are tracked based on the MPEG-4 Facial Animation Parameters layout. The proposed approach relies on the Euclidean distances between the tracked feature points and the reference deformed facial feature points of the six main expressions (smile, sadness, fear, disgust, surprise, and anger). An evaluation of 30 model subjects, selected randomly from the Cohn-Kanade database, was carried out. Results show that the six main facial expressions can be recognized with an overall accuracy of 89%. The proposed approach yields promising recognition rates and can be used in real-time applications.
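The distance-based decision can be illustrated with a toy nearest-template classifier: the expression whose reference points lie closest (mean Euclidean distance) to the tracked points wins. The two-point templates below are made-up stand-ins for the 37 MPEG-4 feature points, assumed purely for illustration:

```python
import math

def classify(points, templates):
    """Return the template name with the smallest mean point-to-point distance."""
    def mean_dist(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return min(templates, key=lambda name: mean_dist(points, templates[name]))

# Hypothetical two-point "reference deformations" (illustrative only).
templates = {
    "smile": [(0.0, 0.0), (1.0, 1.0)],
    "sad":   [(0.0, 0.0), (1.0, -1.0)],
}
```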
Citations: 3
Portable Multi-megapixel Camera with Real-Time Recording and Playback
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.62
Peter Carr, R. Hartley
We are interested in the problem of automatically tracking football players, subject to the constraint that only one vantage point is available. Tracking algorithms benefit from seeing the entire playing field, as one does not have to worry about objects entering and leaving the field of view. However, the image of the entire field must be of sufficient resolution to allow each of the players to be identified automatically. To achieve this desired video data, several high definition video cameras are used to record a football match from a single vantage point. The cameras are oriented to cover the entire playing field, and their images combined to create a single high-resolution video feed. The user is able to pan and zoom in real-time within the unified video stream while it is playing. The system is achieved by distributing tasks across a network of computers and only processing data that will be visible to the user.
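The "only process what will be visible" idea can be sketched as a viewport crop into the stitched frame: pan and zoom then touch just the requested window rather than the full multi-megapixel image. A nested row-list stands in for the combined video frame here:

```python
def viewport(frame, cx, cy, w, h):
    """Return the w x h crop of `frame` (a list of rows) centred on
    (cx, cy), clamped so the window stays inside the frame bounds."""
    H, W = len(frame), len(frame[0])
    x0 = max(0, min(W - w, cx - w // 2))
    y0 = max(0, min(H - h, cy - h // 2))
    return [row[x0:x0 + w] for row in frame[y0:y0 + h]]
```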
Citations: 7
Image Compression Based on Side-Match VQ and SOC
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.68
S. Shie, Long-Tai Chen
A novel image compression scheme that takes advantage of side-match vector quantization (SMVQ) and the search-order-coding (SOC) algorithm is proposed in this article. In the proposed scheme, the image to be compressed is first encoded into an index table using the traditional SMVQ compression technique. The index table is then further compressed with the ordinary SOC algorithm. To improve compression efficiency, a modified search-order-coding algorithm, called left-upper-coding (LUC), is designed. The performance of the two SOC algorithms was compared in our computer simulations. Experimental results show that the SOC algorithm works very well with SMVQ, and that the LUC algorithm is more practical for compressing the SMVQ indexes of an image when computational efficiency is a concern.
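A toy version of the search-order idea: each index is coded either as a short (flag, rank) pair when it matches one of the previously coded neighbours visited in a fixed order, or as the raw index otherwise. The two-neighbour (left, upper) order below is the left-upper intuition in miniature, not the paper's exact code design or bit allocation:

```python
def soc_encode(table):
    """Encode a 2D VQ index table, exploiting left/upper neighbour repeats."""
    codes = []
    for y, row in enumerate(table):
        for x, idx in enumerate(row):
            neighbours = []
            if x > 0:
                neighbours.append(table[y][x - 1])   # left neighbour
            if y > 0:
                neighbours.append(table[y - 1][x])   # upper neighbour
            if idx in neighbours:
                codes.append(("match", neighbours.index(idx)))  # short code
            else:
                codes.append(("raw", idx))                      # full index
    return codes
```

Smooth images produce long runs of identical SMVQ indexes, so most entries compress to "match" codes; that is where the bit savings come from.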
Citations: 1
A Two-Layer Night-Time Vehicle Detector
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.33
Weihong Wang, Chunhua Shen, Jian Zhang, S. Paisitkriangkrai
We present a two-layer night-time vehicle detector in this work. At the first layer, vehicle headlight detection is applied to find areas (bounding boxes) where possible pairs of headlights are located in the image; the Haar-feature-based AdaBoost framework is then applied to detect the vehicle front. This approach achieves very promising performance for vehicle detection at night. Our results show that the proposed algorithm obtains a detection rate of over 90% at a very low false positive rate (1.5%). Without any code optimization, it also runs faster than the standard Haar-feature-based AdaBoost approach.
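The first layer can be sketched as geometric pairing of bright spots: two candidate headlights at roughly the same height and a plausible horizontal separation propose one bounding box, which the second-layer classifier (AdaBoost on Haar features in the paper) would then verify. All thresholds and the box height rule here are illustrative assumptions:

```python
def propose_boxes(spots, max_dy=5, min_dx=20, max_dx=200):
    """Pair bright spots (x, y) into candidate vehicle-front boxes
    (left, top, width, height)."""
    boxes = []
    for i, (x1, y1) in enumerate(spots):
        for x2, y2 in spots[i + 1:]:
            same_height = abs(y1 - y2) <= max_dy
            plausible_gap = min_dx <= abs(x1 - x2) <= max_dx
            if same_height and plausible_gap:
                left = min(x1, x2)
                top = min(y1, y2)
                width = abs(x1 - x2)
                boxes.append((left, top, width, width // 2))  # assumed aspect
    return boxes
```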
Citations: 22
Automatic Mass Segmentation Based on Adaptive Pyramid and Sublevel Set Analysis
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.47
Fei Ma, M. Bajger, M. Bottema
A method based on sublevel sets is presented for refining segmentation of screening mammograms. Initial segmentation is provided by an adaptive pyramid (AP) scheme which is viewed as seeding of the final segmentation by sublevel sets. Performance is tested with and without prior anisotropic smoothing and is compared to refinement based on component merging. The combination of anisotropic smoothing, AP segmentation and sublevel refinement is found to outperform other combinations.
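The sublevel-set machinery is easiest to see in 1D: for a threshold t, the sublevel set {x : f(x) ≤ t} is a union of intervals, and tracking how components appear and merge as t grows drives the refinement. The paper works on 2D mammograms with connected components; this 1D interval version is a simplified illustration:

```python
def sublevel_components(values, t):
    """Return the maximal index intervals where values[i] <= t
    (connected components of the 1D sublevel set at threshold t)."""
    comps, start = [], None
    for i, v in enumerate(values):
        if v <= t and start is None:
            start = i                      # a component opens
        elif v > t and start is not None:
            comps.append((start, i - 1))   # the component closes
            start = None
    if start is not None:
        comps.append((start, len(values) - 1))
    return comps
```

Raising t from 3 to 6 on the test profile merges two separate components into one, which is exactly the merging behaviour the sublevel analysis exploits.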
Citations: 14
Improved 3D Thinning Algorithms for Skeleton Extraction
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.13
F. She, R. H. Chen, W. Gao, P. Hodgson, L. Kong, H. Hong
In this study, we focused on developing a novel 3D thinning algorithm to extract a one-voxel-wide skeleton from various 3D objects while preserving topological information. The algorithm was validated on computer-generated image sets and on real 3D reconstructions acquired by TEMT, and compared with other existing 3D thinning algorithms. It is found that the algorithm preserves medial axes and topology very well, demonstrating many advantages over existing techniques: it is versatile, rigorous, efficient and rotation invariant.
Citations: 31
Video Surveillance: Legally Blind?
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.41
P. Kovesi
This paper shows that most surveillance cameras fall well short of providing sufficient image quality, in both spatial resolution and colour reproduction, for the reliable identification of faces. In addition, the low resolution of surveillance images means that when compression is applied the MPEG/JPEG DCT block size can be such that the spatial frequencies most important for face recognition are corrupted. Making things even worse, the compression process heavily quantizes colour information disrupting the use of pigmentation information to recognize faces. Indeed, the term 'security camera' is probably misplaced. Many surveillance cameras are legally blind, or nearly so.
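The resolution argument is simple geometry: a camera's horizontal pixels are spread across the scene width its field of view covers, so the pixels landing on a face fall off quickly with distance. The 50-pixels-across-the-face threshold below is a common rule of thumb assumed for illustration, not a figure from the paper:

```python
import math

def face_pixels(h_resolution, fov_deg, distance_m, face_width_m=0.16):
    """Approximate horizontal pixels across a face of width face_width_m
    seen at distance_m by a camera with the given resolution and FOV."""
    scene_width = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return h_resolution * face_width_m / scene_width
```

A 720-pixel-wide camera with a 60-degree FOV puts only about 10 pixels across a face at 10 m, well under the assumed recognition threshold, which is the paper's "legally blind" point in miniature.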
Citations: 14
Mixed Pixel Analysis for Flood Mapping Using Extended Support Vector Machine
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.55
C. Dey, X. Jia, D. Fraser, L. Wang
This paper addresses the challenges of flood mapping using multispectral images. Quantitative flood mapping is critical for flood damage assessment and management. Remote sensing images obtained from various satellite or airborne sensors provide valuable data for this application, from which the extent of flooding can be extracted. However, the great challenge in interpreting the data is to achieve more reliable flood extent mapping, including both the fully inundated areas and the 'wet' areas where trees and houses are partly covered by water. This is a typical combined pure-pixel and mixed-pixel problem. In this paper, a recently developed extended Support Vector Machine method for spectral unmixing is applied to generate an integrated map showing both pure pixels (fully inundated areas) and mixed pixels (trees and houses partly covered by water). The outputs were compared with the conventional mean-based linear spectral mixture model, and better performance was demonstrated on a subset of Landsat ETM+ data recorded at the Daly River Basin, NT, Australia, on 3 March 2008, after a flood event.
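The baseline linear spectral mixture model treats each pixel as a convex combination of endmember spectra. For two endmembers this reduces to projecting the pixel onto the segment between them, which can be solved in closed form. The two-band water/vegetation spectra below are made-up illustrative values, and this is the conventional linear model, not the paper's extended SVM:

```python
def unmix2(pixel, e_water, e_veg):
    """Estimate (water_fraction, veg_fraction) for a pixel under a
    two-endmember linear mixture model, clamped to [0, 1]."""
    # Least-squares fit of the single degree of freedom along e_water..e_veg.
    num = sum((p - w) * (v - w) for p, w, v in zip(pixel, e_water, e_veg))
    den = sum((v - w) ** 2 for w, v in zip(e_water, e_veg))
    frac_veg = max(0.0, min(1.0, num / den))
    return 1.0 - frac_veg, frac_veg
```

A mixed pixel then yields fractional cover (e.g. 25% water under a partly flooded canopy) rather than a hard flooded/dry label.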
Citations: 6
Luminescent Microspheres Resolved from Strong Background on an Automated Time-Gated Luminescence Microscopy Workstation
Pub Date : 2009-12-01 DOI: 10.1109/DICTA.2009.44
Len Hamey, R. Connally, Simon Wong Too Yen, Thomas S. Lawson, J. Piper, J. Iredell
Fluorescence microscopy is a powerful tool for the rapid identification of target organisms. However, natural autofluorescence often interferes with identification. Time-gated luminescence microscopy (TGLM) uses luminescent labels with long persistence in conjunction with digital imaging to regain discriminative power. Following the excitation pulse, short-lived autofluorescence decays rapidly whereas the long-lived emission from lanthanide doped polymer beads persists for hundreds of microseconds. After a short resolving period, a gated high gain camera captures the persistent emission in the absence of short-lived fluorescence. We report on the development of a TGLM software system for automated scanning of microscope slides, and show its use to resolve luminescent microspheres within a matrix of autofluorescent algae.
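The gating principle can be checked numerically: after the excitation pulse, wait a resolving delay so the fast autofluorescence decay dies away, then integrate the long-lived lanthanide emission over the camera's gate window. The decay constants and amplitudes below are illustrative assumptions, not measured values from the paper:

```python
import math

def gated_signal(a0, tau_fast, b0, tau_slow, delay_us, window_us, dt=1.0):
    """Integrate a two-component exponential decay (fast autofluorescence +
    slow label emission) from delay_us over a window of window_us."""
    total = 0.0
    t = delay_us
    while t < delay_us + window_us:
        total += (a0 * math.exp(-t / tau_fast) + b0 * math.exp(-t / tau_slow)) * dt
        t += dt
    return total
```

With a nanosecond-scale autofluorescence lifetime and a hundreds-of-microseconds label lifetime, a 10 µs delay leaves essentially pure label signal, which is why the gated image regains discriminative power.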
Citations: 2