
Latest publications from the 2010 International Conference on Digital Image Computing: Techniques and Applications

An Efficient Frequency-Domain Velocity-Filter Implementation for Dim Target Detection
H. L. Kennedy
An efficient Fourier-domain implementation of the velocity filter is presented. The Sliding Discrete Fourier Transform (SDFT) is exploited to yield a Track-Before-Detect (TBD) algorithm with a complexity that is independent of the filter integration time. As a consequence, dim targets near the noise floor of acquisition or surveillance sensors may be detected, and their states estimated, at a relatively low computational cost. The performance of the method is demonstrated using real sensor data. When processing the acquired data, the SDFT implementation is approximately 3 times faster than the equivalent Fast Fourier Transform (FFT) implementation and 16 times faster than the corresponding spatiotemporal implementation.
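The abstract does not include implementation detail, but the constant per-sample cost it mentions comes from the standard sliding-DFT recurrence. The sketch below (plain NumPy, with illustrative function and parameter names) shows that recurrence for a single frequency bin; it is not the authors' full track-before-detect pipeline.

```python
import numpy as np

def sliding_dft(x, N, k):
    """Track DFT bin k of a length-N sliding window with the recurrence
    X_k(n) = (X_k(n-1) + x[n] - x[n-N]) * exp(j*2*pi*k/N).
    Cost per new sample is O(1), independent of the window length N,
    which is the property exploited for constant-cost integration."""
    w = np.exp(2j * np.pi * k / N)
    xpad = np.concatenate([np.zeros(N), np.asarray(x, dtype=float)])
    X = 0.0 + 0.0j
    out = np.empty(len(x), dtype=complex)
    for n in range(len(x)):
        X = (X + xpad[n + N] - xpad[n]) * w   # add newest sample, drop oldest
        out[n] = X  # equals np.fft.fft(x[n - N + 1:n + 1])[k] once n >= N - 1
    return out
```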
{"title":"An Efficient Frequency-Domain Velocity-Filter Implementation for Dim Target Detection","authors":"H. L. Kennedy","doi":"10.1109/DICTA.2010.16","DOIUrl":"https://doi.org/10.1109/DICTA.2010.16","url":null,"abstract":"An efficient Fourier-domain implementation of the velocity filter is presented. The Sliding Discrete Fourier Transform (SDFT) is exploited to yield a Track-Before-Detect (TBD) algorithm with a complexity that is independent of the filter integration time. As a consequence, dim targets near the noise floor of acquisition or surveillance sensors may be detected, and their states estimated, at a relatively low computational cost. The performance of the method is demonstrated using real sensor data. When processing the acquired data, the SDFT implementation is approximately 3 times faster than the equivalent Fast Fourier Transform (FFT) implementation and 16 times faster than the corresponding spatiotemporal implementation.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116892467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Color Constancy-Based Visibility Enhancement in Low-Light Conditions
Jing Yu, Q. Liao
Imaging in low-light conditions is often significantly degraded by insufficient lighting and color cast. Poor visibility becomes a major problem for many applications of computer vision. In this paper, we propose a novel color constancy-based method to enhance the visibility of low-light images. The proposed method applies an appropriate color constancy algorithm to the active set of pixels across the image. A post-processing step is also added to enhance the global contrast and lightness. Results on a wide variety of images demonstrate that the proposed method achieves good rendition of lightness, contrast and color fidelity without the graying-out or halo artifacts intrinsically present in Retinex approaches.
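The abstract does not say which color constancy algorithm is applied to the active set of pixels. As a rough stand-in, the sketch below uses the classic gray-world assumption over all pixels followed by a simple percentile stretch as the contrast/lightness post-processing; function names and defaults are illustrative only, not the authors' method.

```python
import numpy as np

def gray_world(img):
    """Gray-world color constancy: rescale each channel so its mean matches
    the overall mean, removing a global color cast. img is H x W x 3, 0-255."""
    img = img.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img * gains, 0, 255)

def stretch_lightness(img, low=1, high=99):
    """Illustrative global contrast/lightness post-processing: percentile stretch."""
    lo, hi = np.percentile(img, [low, high])
    return np.clip((img - lo) / max(hi - lo, 1e-6) * 255.0, 0, 255)
```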
{"title":"Color Constancy-Based Visibility Enhancement in Low-Light Conditions","authors":"Jing Yu, Q. Liao","doi":"10.1109/DICTA.2010.81","DOIUrl":"https://doi.org/10.1109/DICTA.2010.81","url":null,"abstract":"Imaging in low-light conditions is often significantly degraded by insufficient lighting and color cast. Poor visibility becomes a major problem for many applications of computer vision. In this paper, we propose a novel color constancy-based method to enhance the visibility of low-light images. The proposed method applies an appropriate color constancy algorithm to the active set of pixels across the image. The post-processing step is also added to enhance the global contrast and lightness. Results on a wide variety of images demonstrate that the proposed method can achieve good rendition for lightness, contrast and color fidelity without graying-out artifacts or halo artifacts intrinsically present in Retinex approaches.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116349130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Empirical Study of Multi-label Classification Methods for Image Annotation and Retrieval
G. Nasierding, A. Kouzani
This paper presents an empirical study of multi-label classification methods and gives suggestions on which methods are effective for automatic image annotation applications. The study shows that the triple random ensemble multi-label classification algorithm (TREMLC) outperforms its counterparts, especially on the scene image dataset. The multi-label k-nearest neighbor (ML-kNN) and binary relevance (BR) learning algorithms perform well on the Corel image dataset. Based on the overall evaluation results, label prediction performance of the algorithms is illustrated on selected example images. This provides an indication of the suitability of different multi-label classification methods for automatic image annotation under different problem settings.
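For readers unfamiliar with the compared methods, the sketch below shows the simplest possible multi-label prediction by nearest-neighbor voting. It is a stripped-down cousin of ML-kNN (which additionally uses MAP estimation with label priors) and is meant only to make the multi-label setting concrete, not to reproduce any of the evaluated algorithms.

```python
import numpy as np

def knn_multilabel_predict(X_train, Y_train, X_test, k=5):
    """Multi-label prediction by k-nearest-neighbor label voting: a test image
    receives every label carried by at least half of its k nearest training
    images. Y_train is an (n_samples, n_labels) binary indicator matrix."""
    preds = np.zeros((len(X_test), Y_train.shape[1]), dtype=int)
    for i, x in enumerate(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nn = np.argsort(dists)[:k]                    # indices of the k nearest
        votes = Y_train[nn].sum(axis=0)               # per-label neighbor counts
        preds[i] = (votes >= k / 2.0).astype(int)     # majority vote per label
    return preds
```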
{"title":"Empirical Study of Multi-label Classification Methods for Image Annotation and Retrieval","authors":"G. Nasierding, A. Kouzani","doi":"10.1109/DICTA.2010.113","DOIUrl":"https://doi.org/10.1109/DICTA.2010.113","url":null,"abstract":"This paper presents an empirical study of multi-label classification methods, and gives suggestions for multi-label classification that are effective for automatic image annotation applications. The study shows that triple random ensemble multi-label classification algorithm (TREMLC) outperforms among its counterparts, especially on scene image dataset. Multi-label k-nearest neighbor (ML-kNN) and binary relevance (BR) learning algorithms perform well on Corel image dataset. Based on the overall evaluation results, examples are given to show label prediction performance for the algorithms using selected image examples. This provides an indication of the suitability of different multi-label classification methods for automatic image annotation under different problem settings.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124485506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 22
Scarf: Semi-automatic Colorization and Reliable Image Fusion
Anwaar Ul Haq, I. Gondal, M. Murshed
Nighttime imagery poses significant challenges for enhancement due to the loss of color information and the limitation of a single sensor in capturing complete visual information at night. To cope with this challenge, multiple sensors are used to capture reliable nighttime imagery, which places additional demands on reliable visual information fusion. In this paper, we present a system, Scarf, which performs reliable image fusion using advanced feature extraction techniques and a novel semi-automatic colorization based on optimization that conforms to the human visual system. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed system.
{"title":"Scarf: Semi-automatic Colorization and Reliable Image Fusion","authors":"Anwaar Ul Haq, I. Gondal, M. Murshed","doi":"10.1109/DICTA.2010.80","DOIUrl":"https://doi.org/10.1109/DICTA.2010.80","url":null,"abstract":"Nighttime imagery poses significant challenges to its enhancement due to loss of color information and limitation of single sensor to capture complete visual information at night. To cope with this challenge, multiple sensors are used to capture reliable nighttime imagery which presents additional demands for reliable visual information fusion. In this paper, we present a system, Scarf, which proposes reliable image fusion using advanced feature extraction techniques and a novel semi-automatic colorization based on optimization conformal to human visual system. Subjective and objective quality evaluation proves the effectiveness of proposed system.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130458141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Segmentation of Dense 2D Bacilli Populations
P. Vallotton, L. Turnbull, C. Whitchurch, Lisa Mililli
Bacteria outnumber all other known organisms by far so there is considerable interest in characterizing them in detail and in measuring their diversity, evolution, and dynamics. Here, we present a system capable of identifying rod-like bacteria (bacilli) correctly in high resolution phase contrast images. We use a probabilistic model together with several purpose-designed image features in order to split bacteria at the septum consistently. Our method commits less than 1% error on test images. Our method should also be applicable to study dense 2D systems composed of elongated elements, such as some viruses, molecules, parasites (plasmodium, euglena), diatoms, and crystals.
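As a rough illustration of the first stage only (finding elongated dark objects as bacilli candidates), the sketch below thresholds a phase-contrast image and keeps elongated connected components using scikit-image. The paper's probabilistic model and septum-splitting step are not reproduced, and the thresholds are made-up defaults.

```python
from skimage import filters, measure

def find_rod_candidates(phase_img, min_area=50, min_elongation=2.5):
    """Candidate bacilli detection: threshold dark objects in a phase-contrast
    image, label connected components, and keep elongated ones. The thresholds
    here are arbitrary defaults, not values from the paper."""
    th = filters.threshold_otsu(phase_img)
    mask = phase_img < th                      # bacteria appear dark in phase contrast
    labels = measure.label(mask)
    rods = []
    for region in measure.regionprops(labels):
        if region.area < min_area or region.minor_axis_length == 0:
            continue
        if region.major_axis_length / region.minor_axis_length >= min_elongation:
            rods.append(region.label)
    return labels, rods
```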
{"title":"Segmentation of Dense 2D Bacilli Populations","authors":"P. Vallotton, L. Turnbull, C. Whitchurch, Lisa Mililli","doi":"10.1109/DICTA.2010.23","DOIUrl":"https://doi.org/10.1109/DICTA.2010.23","url":null,"abstract":"Bacteria outnumber all other known organisms by far so there is considerable interest in characterizing them in detail and in measuring their diversity, evolution, and dynamics. Here, we present a system capable of identifying rod-like bacteria (bacilli) correctly in high resolution phase contrast images. We use a probabilistic model together with several purpose-designed image features in order to split bacteria at the septum consistently. Our method commits less than 1% error on test images. Our method should also be applicable to study dense 2D systems composed of elongated elements, such as some viruses, molecules, parasites (plasmodium, euglena), diatoms, and crystals.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"8 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126798547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Texture-Based Estimation of Physical Characteristics of Sand Grains
A. Newell, Lewis D. Griffin, R. Morgan, P. A. Bull
The common occurrence and transportability of quartz sand grains make them useful for forensic analysis, provided that grains can be accurately and consistently assigned to prespecified types. Recent advances in the analysis of surface texture features found in scanning electron microscopy images of such grains have advanced this process. However, this requires expert knowledge that is not only time-intensive but also rare, meaning that automation is a highly attractive prospect if good levels of performance can be achieved. Basic Image Feature Columns (BIF Columns), which use local symmetry type to produce a highly invariant yet distinctive encoding, have shown leading performance in standard texture recognition tasks used in computer vision. However, the system has not previously been tested on a real-world problem. Here we demonstrate that the BIF Column system offers a simple yet effective solution to grain classification using surface texture. In a two-class problem, where human-level performance is expected to be perfect, the system classifies all but one grain from a sample of 88 correctly. In a harder task, where expert human performance is expected to be significantly less than perfect, our system achieves a correct classification rate of over 80%, with clear indications that performance could be improved if a larger dataset were available. Furthermore, very little tuning or adaptation has been necessary to achieve these results, giving cause for optimism about the general applicability of this system to other texture classification problems in forensic analysis.
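The sketch below is a much-simplified local-symmetry labelling loosely in the spirit of Basic Image Features: each pixel is assigned one of four classes from Gaussian-derivative responses, and the class histogram serves as a texture descriptor. The real BIF-column encoding uses seven symmetry classes stacked over several scales, so this is a single-scale caricature for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_symmetry_histogram(img, sigma=2.0, eps=0.05):
    """Label each pixel as flat, edge-like, light-blob or dark-blob from
    scale-normalized Gaussian-derivative responses and histogram the labels
    as a texture descriptor. A 4-class, single-scale simplification of the
    7-class, multi-scale BIF-column encoding."""
    img = img.astype(float)
    s = gaussian_filter(img, sigma)                      # smoothed image
    sx = gaussian_filter(img, sigma, order=(0, 1))       # first derivatives
    sy = gaussian_filter(img, sigma, order=(1, 0))
    sxx = gaussian_filter(img, sigma, order=(0, 2))      # second derivatives
    syy = gaussian_filter(img, sigma, order=(2, 0))
    slope = sigma * np.hypot(sx, sy)                     # gradient strength
    lap = sigma ** 2 * (sxx + syy)                       # scale-normalized Laplacian
    scores = np.stack([eps * s,        # 0: flat (low structure)
                       2.0 * slope,    # 1: slope / edge
                       -lap,           # 2: light blob (negative Laplacian)
                       lap])           # 3: dark blob (positive Laplacian)
    labels = scores.argmax(axis=0)
    hist = np.bincount(labels.ravel(), minlength=4).astype(float)
    return hist / hist.sum()
```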
{"title":"Texture-Based Estimation of Physical Characteristics of Sand Grains","authors":"A. Newell, Lewis D. Griffin, R. Morgan, P. A. Bull","doi":"10.1109/DICTA.2010.91","DOIUrl":"https://doi.org/10.1109/DICTA.2010.91","url":null,"abstract":"The common occurrence and transportability of quartz sand grains make them useful for forensic analysis, providing that grains can be accurately and consistently designated into prespecified types. Recent advances in the analysis of surface texture features found in scanning electron microscopy images of such grains have advanced this process. However, this requires expert knowledge that is not only time intensive, but also rare, meaning that automation is a highly attractive prospect if it were possible to achieve good levels of performance. Basic Image Feature Columns (BIF Columns), which use local symmetry type to produce a highly invariant yet distinctive encoding, have shown leading performance in standard texture recognition tasks used in computer vision. However, the system has not previously been tested on a real world problem. Here we demonstrate that the BIF Column system offers a simple yet effective solution to grain classification using surface texture. In a two class problem, where human level performance is expected to be perfect, the system classifies all but one grain from a sample of 88 correctly. In a harder task, where expert human performance is expected to be significantly less than perfect, our system achieves a correct classification rate of over 80%, with clear indications that performance can be improved if a larger dataset were available. Furthermore, very little tuning or adaptation has been necessary to achieve these results giving cause for optimism in the general applicability of this system to other texture classification problems in forensic analysis.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125228394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Multiple Views Tracking of Maritime Targets
Thomas Albrecht, G. West, T. Tan, Thanh Ly
This paper explores techniques for multiple views target tracking in a maritime environment using a mobile surveillance platform. We utilise an omnidirectional camera to capture full spherical video and use an Inertial Measurement Unit (IMU) to estimate the platform's ego-motion. For each target a part of the omnidirectional video is extracted, forming a corresponding set of virtual cameras. Each target is then tracked using a dynamic template matching method and particle filtering. Its predictions are then used to continuously adjust the orientations of the virtual cameras, keeping a lock on the targets. We demonstrate the performance of the application in several real-world maritime settings.
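The tracker pairs template matching (which produces position measurements) with particle filtering. The sketch below shows one predict/update/resample cycle of a generic constant-velocity particle filter in NumPy, with made-up noise parameters; it stands in for, but is not, the paper's tracker.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         process_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle of a constant-velocity particle filter
    for a 2-D target. particles is (n, 4) with rows [x, y, vx, vy]; measurement
    is the (x, y) position reported by the template matcher."""
    # Predict: constant-velocity motion plus Gaussian process noise.
    particles[:, 0:2] += particles[:, 2:4]
    particles += rng.normal(0.0, process_std, particles.shape)
    # Update: reweight by the likelihood of the position measurement.
    d2 = np.sum((particles[:, 0:2] - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2) + 1e-12
    weights /= weights.sum()
    # Resample (multinomial) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2.0:
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    estimate = np.average(particles[:, 0:2], axis=0, weights=weights)
    return particles, weights, estimate

# Usage sketch: rng = np.random.default_rng(0); call once per video frame with
# the measurement returned by template matching for that target.
```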
{"title":"Multiple Views Tracking of Maritime Targets","authors":"Thomas Albrecht, G. West, T. Tan, Thanh Ly","doi":"10.1109/DICTA.2010.59","DOIUrl":"https://doi.org/10.1109/DICTA.2010.59","url":null,"abstract":"This paper explores techniques for multiple views target tracking in a maritime environment using a mobile surveillance platform. We utilise an omnidirectional camera to capture full spherical video and use an Inertial Measurement Unit (IMU) to estimate the platform's ego-motion. For each target a part of the omnidirectional video is extracted, forming a corresponding set of virtual cameras. Each target is then tracked using a dynamic template matching method and particle filtering. Its predictions are then used to continuously adjust the orientations of the virtual cameras, keeping a lock on the targets. We demonstrate the performance of the application in several real-world maritime settings.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131106448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Expression-Invariant 3D Face Recognition Using Patched Geodesic Texture Transform
F. Hajati, A. Raie, Yongsheng Gao
Numerous methods have been proposed for expression-invariant 3D face recognition, but little attention has been given to local representations of the texture of 3D images. In this paper, we propose an expression-invariant 3D face recognition approach based on locally extracted moments of the texture when only one exemplar per person is available. We use a geodesic texture transform accompanied by Pseudo Zernike Moments to extract local feature vectors from the texture of a face. An extensive experimental investigation is conducted using the publicly available BU-3DFE face database, covering face recognition under expression variations. The performance of the proposed method is compared with that of two benchmark approaches. The encouraging experimental results demonstrate that the proposed method can be used for 3D face recognition with single-model databases.
{"title":"Expression-Invariant 3D Face Recognition Using Patched Geodesic Texture Transform","authors":"F. Hajati, A. Raie, Yongsheng Gao","doi":"10.1109/DICTA.2010.52","DOIUrl":"https://doi.org/10.1109/DICTA.2010.52","url":null,"abstract":"Numerous methods have been proposed for the expression-invariant 3D face recognition, but a little attention is given to the local-based representation for the texture of the 3D images. In this paper, we propose an expression-invariant 3D face recognition approach based on the locally extracted moments of the texture when only one exemplar per person is available. We use a geodesic texture transform accompanied by Pseudo Zernike Moments to extract local feature vectors from the texture of a face. An extensive experimental investigation is conducted using publicly available BU-3DFE face databases covering face recognition under expression variations. The performance of the proposed method is compared with the performance of two benchmark approaches. The encouraging experimental results demonstrate that the proposed method can be used for 3D face recognition in single model databases.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"31 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132757367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
An Enhancement to Closed-Form Method for Natural Image Matting
Jun Zhu, Dengsheng Zhang, Guojun Lu
Natural image matting is the task of estimating the fractional opacity of the foreground layer of an image. Many matting methods have been proposed, and most of them are trimap-based. Among these methods, closed-form matting offers both trimap-based and scribble-based matting. However, the closed-form method causes significant errors in background-hole regions due to over-smoothing. In this paper, we identify the source of the problem and propose a solution that enhances the closed-form method. Experiments show that our enhanced method improves accuracy for trimap-based images and obtains results similar to the closed-form method for scribble-based matting.
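Levin et al.'s matting Laplacian is too long to reproduce here, but the sketch below shows the general scribble-propagation idea that closed-form matting builds on: solve a sparse linear system that spreads user-scribbled alpha values along color-affinity edges. The affinity weights and parameters are simplified stand-ins, not the closed-form formulation or the enhancement proposed in the paper.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def propagate_alpha(image, scribble_alpha, scribble_mask, lam=100.0, sigma=0.1):
    """Spread scribbled alpha values over the image by solving
    (L + lam * D) a = lam * D * s, where L is a color-affinity graph Laplacian
    on 4-connected pixels, D marks scribbled pixels and s holds their values.
    image: H x W x 3 floats in [0, 1]; scribble_mask: H x W bool; scribble_alpha: H x W."""
    h, w, _ = image.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                       # right and down neighbors
        a = idx[:h - di, :w - dj].ravel()
        b = idx[di:, dj:].ravel()
        ca = image[:h - di, :w - dj].reshape(-1, 3)
        cb = image[di:, dj:].reshape(-1, 3)
        wgt = np.exp(-np.sum((ca - cb) ** 2, axis=1) / (2 * sigma ** 2))
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sparse.coo_matrix((np.concatenate(vals),
                           (np.concatenate(rows), np.concatenate(cols))),
                          shape=(n, n)).tocsr()
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    D = sparse.diags(scribble_mask.ravel().astype(float))
    alpha = spsolve((L + lam * D).tocsc(),
                    lam * D @ scribble_alpha.ravel().astype(float))
    return np.clip(alpha, 0.0, 1.0).reshape(h, w)
```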
{"title":"An Enhancement to Closed-Form Method for Natural Image Matting","authors":"Jun Zhu, Dengsheng Zhang, Guojun Lu","doi":"10.1109/DICTA.2010.110","DOIUrl":"https://doi.org/10.1109/DICTA.2010.110","url":null,"abstract":"Natural image matting is a task to estimate fractional opacity of foreground layer from an image. Many matting methods have been proposed, and most of them are trimap-based. Among these methods, closed-form matting offers both trimap-based and scribble-based matting. However, the closed-form method causes significant errors at background-hole regions due to over-smoothing. In this paper, we identify the source of the problem and propose our solution to enhance the closed-form method. Experiments show that our enhanced method can improve the accuracy for trimap-based images and obtain similar result to the closed-form method for scribble-based matting.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133806926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Linear Feature Detection on GPUs
L. Domanski, Changming Sun, Raquibul Hassan, P. Vallotton, Dadong Wang
The acceleration of an existing linear feature detection algorithm for 2D images using GPUs is discussed. The two most time consuming components of this process are implemented on the GPU, namely, linear feature detection using dual-peak directional non-maximum suppression, and a gap filling process that joins disconnected feature masks to rectify false negatives. Multiple steps or image filters in each component are combined into a single GPU kernel to minimise data transfers to off-chip GPU RAM, and issues relating to on-chip memory utilisation, caching, and memory coalescing are considered. The presented algorithm is useful for applications needing to analyse complex linear structures, and examples are given for dense neurite images from the biotech domain.
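The paper's contribution is the fused GPU kernels, which are hard to show briefly. The CPU sketch below illustrates the underlying operation instead: oriented line filtering followed by directional non-maximum suppression (simplified from the paper's dual-peak variant), in NumPy/SciPy with made-up kernel sizes.

```python
import numpy as np
from scipy import ndimage

def oriented_line_responses(img, length=7, n_orient=8):
    """Correlate the image with short line-shaped mean filters at several
    orientations and subtract the local mean, a crude stand-in for the
    paper's directional linear-feature filters. length should be odd."""
    img = img.astype(float)
    c = length // 2
    responses = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        ker = np.zeros((length, length))
        for t in np.linspace(-c, c, 4 * length):          # rasterize a line through the centre
            i = int(round(c + t * np.sin(theta)))
            j = int(round(c + t * np.cos(theta)))
            ker[i, j] = 1.0
        ker /= ker.sum()
        responses.append(ndimage.correlate(img, ker) - ndimage.uniform_filter(img, length))
    return np.stack(responses)                            # shape (n_orient, H, W)

def directional_nms(resp):
    """Keep a pixel only if its best oriented response is not exceeded by the
    responses of its two neighbors across the detected line direction."""
    n_orient = resp.shape[0]
    best = resp.argmax(axis=0)
    val = resp.max(axis=0)
    out = np.zeros_like(val)
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        di, dj = int(round(np.cos(theta))), int(round(-np.sin(theta)))  # perpendicular offset
        keep = ((best == k)
                & (val >= np.roll(val, (di, dj), axis=(0, 1)))
                & (val >= np.roll(val, (-di, -dj), axis=(0, 1))))
        out[keep] = val[keep]
    return out
```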
{"title":"Linear Feature Detection on GPUs","authors":"L. Domanski, Changming Sun, Raquibul Hassan, P. Vallotton, Dadong Wang","doi":"10.1109/DICTA.2010.112","DOIUrl":"https://doi.org/10.1109/DICTA.2010.112","url":null,"abstract":"The acceleration of an existing linear feature detection algorithm for 2D images using GPUs is discussed. The two most time consuming components of this process are implemented on the GPU, namely, linear feature detection using dual-peak directional non-maximum suppression, and a gap filling process that joins disconnected feature masks to rectify false negatives. Multiple steps or image filters in each component are combined into a single GPU kernel to minimise data transfers to off-chip GPU RAM, and issues relating to on-chip memory utilisation, caching, and memory coalescing are considered. The presented algorithm is useful for applications needing to analyse complex linear structures, and examples are given for dense neurite images from the biotech domain.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123688092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9