
2013 2nd IAPR Asian Conference on Pattern Recognition: Latest Publications

Melanin and Hemoglobin Identification for Skin Disease Analysis
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.9
Zhao Liu, J. Zerubia
This paper proposes a novel method to extract melanin and hemoglobin concentrations of human skin, using bilateral decomposition together with knowledge of a multi-layered skin model and the absorbance characteristics of the major chromophores. Unlike state-of-the-art approaches, the proposed method can handle the highlights and strong shading that commonly appear in skin color images captured in uncontrolled environments. The derived melanin and hemoglobin indices, directly related to pathological tissue conditions, tend to be less influenced by external imaging factors and are effective for describing pigmentation distributions. Experiments demonstrate the value of the proposed method for computer-aided diagnosis of different skin diseases. The diagnostic accuracy for melanoma increases by 9-15% on conventional RGB lesion images compared to techniques using other color descriptors. The discrimination of inflammatory acne and hyperpigmentation reveals the acne stage, which is useful for acne severity evaluation. It is expected that this new method will prove useful for the analysis of other skin diseases.
Citations: 12
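The melanin and hemoglobin indices above rest on chromophore absorbance in a layered skin model. As a rough illustration only, the sketch below unmixes optical density (negative log RGB) onto two chromophore axes; the axis values are placeholders, not the values used in the paper, and the paper's bilateral decomposition for removing highlights and shading is not reproduced.

```python
import numpy as np

# Illustrative absorbance direction vectors in log-RGB (optical density) space.
# Placeholder values for the sketch, not the paper's chromophore model.
MELANIN_AXIS = np.array([0.74, 0.57, 0.36])
HEMOGLOBIN_AXIS = np.array([0.41, 0.81, 0.42])

def chromophore_indices(rgb):
    """Project per-pixel optical density onto two chromophore axes.

    rgb : float array of shape (H, W, 3), values in (0, 1].
    Returns per-pixel melanin and hemoglobin index maps.
    """
    od = -np.log(np.clip(rgb, 1e-4, 1.0))             # Beer-Lambert optical density
    axes = np.stack([MELANIN_AXIS, HEMOGLOBIN_AXIS])  # 2 x 3 mixing matrix
    flat = od.reshape(-1, 3)
    # Least-squares unmixing: od ~= axes.T @ concentrations (per pixel)
    conc, *_ = np.linalg.lstsq(axes.T, flat.T, rcond=None)
    melanin, hemoglobin = conc.reshape(2, *rgb.shape[:2])
    return melanin, hemoglobin

rgb = np.random.rand(32, 32, 3) * 0.9 + 0.05
mel, hem = chromophore_indices(rgb)
print(mel.shape, hem.shape)
```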
Towards Robust Gait Recognition
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.211
Yasushi Makihara
Gait recognition is a method of biometric person authentication based on a person's unconscious walking manner. Unlike other biometrics such as DNA, fingerprints, veins, and the iris, gait can be recognized even at a distance from a camera and without the subject's cooperation, and hence it is expected to be applied in many fields: criminal investigation, forensic science, and surveillance. However, the absence of the subject's cooperation may sometimes induce large intra-subject variations in gait due to changes in viewpoint, walking direction, speed, clothing, and shoes. We therefore develop methods for robust gait recognition with (1) an appearance-based view transformation model and (2) a kinematics-based speed transformation model. Moreover, CCTV footage is often stored as low frame-rate video due to limited communication bandwidth and storage size, which makes it much harder to observe continuous gait motion and hence significantly degrades recognition performance. We therefore address this problem with (3) a technique for periodic temporal super-resolution from low frame-rate video. We demonstrate the effectiveness of the proposed methods on our gait databases.
Citations: 9
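The view transformation model mentioned in (1) maps gait features observed from one view into another so that a probe and a gallery recorded from different viewpoints can be compared. The sketch below fits a generic linear transform by least squares over training subjects seen from both views; it only illustrates the idea and is not the model proposed in the talk.

```python
import numpy as np

def fit_view_transform(feats_src, feats_dst):
    """Fit a linear map W such that feats_dst ~= feats_src @ W.

    feats_src, feats_dst : (n_subjects, d) gait features (for example,
    flattened silhouette-based templates) of the same subjects seen
    from two different views.
    """
    W, *_ = np.linalg.lstsq(feats_src, feats_dst, rcond=None)
    return W

def transform(feat_src, W):
    """Map a single source-view feature into the destination view."""
    return feat_src @ W

# Usage: match a probe seen from view A against a gallery enrolled at view B.
rng = np.random.default_rng(0)
train_a, train_b = rng.random((50, 256)), rng.random((50, 256))
W = fit_view_transform(train_a, train_b)
probe_in_b = transform(rng.random(256), W)   # probe mapped into the gallery view
print(probe_in_b.shape)
```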
Compacting Large and Loose Communities
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.137
V. Chandrashekar, Shailesh Kumar, C. V. Jawahar
Detecting compact overlapping communities in large networks is an important pattern recognition problem with applications in many domains. Most community detection algorithms trade off community size, compactness, and the scalability of finding communities. The Clique Percolation Method (CPM) and Local Fitness Maximization (LFM) are two prominent and commonly used overlapping community detection methods that scale to large networks. However, a significant number of the communities they find are large, noisy, and loose. In this paper, we propose a general algorithm that takes such large and loose communities generated by any method and refines them into compact communities in a systematic fashion. We define a new measure of community-ness based on eigenvector centrality, identify loose communities using this measure, and propose an algorithm for partitioning such loose communities into compact communities. We refine the communities found by CPM and LFM using our method and show that they are more effective than the original communities in a recommendation-engine task.
Citations: 0
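The eigenvector-centrality ingredient can be illustrated with a small sketch: score a candidate community by the centrality distribution of its induced subgraph and flag low-scoring ones as loose. The scoring function below (mean over max centrality) is an assumption made for illustration, not the paper's exact measure.

```python
import networkx as nx
import numpy as np

def community_score(G, members):
    """Score a community by eigenvector centrality on its induced subgraph.

    Intuition: a compact community spreads centrality fairly evenly over its
    members, while a loose one leaves many members with near-zero centrality.
    """
    sub = G.subgraph(members)
    if sub.number_of_edges() == 0:
        return 0.0
    cent = nx.eigenvector_centrality(sub, max_iter=1000)
    values = np.array(list(cent.values()))
    return float(values.mean() / (values.max() + 1e-12))

# Communities whose score falls below a threshold would be candidates for
# further partitioning into compact sub-communities.
G = nx.karate_club_graph()
print(community_score(G, list(range(10))))
```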
Automatic Elements Extraction of Chinese Web News Using Prior Information of Content and Structure
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.52
Chengru Song, Shifeng Weng, Changshui Zhang
We propose a set of efficient processes for extracting all four elements of Chinese news web pages, namely the news title, release date, news source, and main text. Our approach is based on a deep analysis of the content and structural features of current Chinese news pages. We use content indicators as the key to recovering the tree structure of the main text. Additionally, we introduce the concept of a Length-Distance Ratio to improve performance. Unlike most existing methods, our method depends little on sample selection and generalizes well regardless of the training process. We have tested our approach on 1721 labeled Chinese news pages from 429 websites. Results show that 87% accuracy was achieved for news source extraction, and over 95% accuracy for the other three elements.
Citations: 0
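The abstract does not define the Length-Distance Ratio, so the sketch below assumes one plausible reading purely for illustration: the text length of a candidate block divided by its distance from the detected title node, so that long blocks near the title win. The data model and scoring are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Block:
    text: str
    distance: int   # hypothetical: number of DOM nodes between this block and the title

def length_distance_ratio(block: Block) -> float:
    """Hypothetical reading of a Length-Distance Ratio: longer blocks that sit
    closer to the title are more likely to be main text."""
    return len(block.text) / (1 + block.distance)

blocks = [
    Block("Published 2013-11-05 | Source: example.com", distance=1),
    Block("Long paragraph of body text " * 20, distance=2),
    Block("Copyright footer", distance=15),
]
main_text = max(blocks, key=length_distance_ratio)
print(main_text.text[:40])
```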
Structure Feature Extraction for Finger-Vein Recognition
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.113
Di Cao, Jinfeng Yang, Yihua Shi, Chenghua Xu
A new finger-vein image matching method based on structure features is proposed in this paper. To describe the finger-vein structures conveniently, the vein skeletons are first extracted and used as the primitive information. Based on the skeletons, a curve tracing scheme that relies on junction points is proposed for curve segment extraction. Next, the curve segments are encoded piecewise using a modified included angle chain, and the structure feature code of the vein network is generated sequentially. Finally, a dynamic scheme is adopted for structure feature matching. Experimental results show that the proposed method performs well in improving finger-vein matching accuracy.
Citations: 16
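An included angle chain encodes a curve by the angles formed at successive points along it. The sketch below computes and quantizes those angles for an ordered skeleton segment; the paper uses a modified variant whose details are not reproduced here, so the sampling step and quantization are illustrative.

```python
import numpy as np

def included_angle_chain(points, step=5, n_bins=16):
    """Encode a vein curve segment as a chain of quantized included angles.

    points : (N, 2) ordered skeleton coordinates of one curve segment.
    step   : sampling interval along the curve.
    Returns one integer code per sampled interior point.
    """
    pts = np.asarray(points, dtype=float)[::step]
    codes = []
    for a, b, c in zip(pts[:-2], pts[1:-1], pts[2:]):
        u, v = a - b, c - b
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        angle = np.arccos(np.clip(cosang, -1.0, 1.0))     # included angle in [0, pi]
        codes.append(int(angle / np.pi * (n_bins - 1)))   # quantize to n_bins levels
    return codes

# Example: a right-angle corner yields a mid-range code, straight runs the top code.
print(included_angle_chain([(0, 0), (5, 0), (10, 0), (10, 5), (10, 10)], step=1))
```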
Adaptive CFA Demosaicking Using Bilateral Filters for Colour Edge Preservation
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.75
J. S. J. Li, S. Randhawa
Colour Filter Array (CFA) demosaicking is the process of interpolating missing colour values to produce a full-colour image when a single image sensor is used. For smooth regions, a higher order of interpolation will usually achieve higher accuracy. However, when there is a colour edge, a lower order of interpolation is desirable, as it avoids interpolating across the edge and blurring it. In this paper, a bilateral filter, which is known to preserve sharp edges, is used to adaptively modify the interpolation weights. When there is a colour edge, the weights bias towards a lower order of interpolation using only the closer pixel values. Otherwise, the weights bias towards a higher order of interpolation for smooth regions. To avoid interpolation across a possible edge adjacent to the missing pixel location, four estimates are first determined using the adaptive bilateral filter, one for each cardinal direction. A classifier comprising a weighted median filter together with a bilateral filter is then used to produce the missing colour pixel value from the four estimates. The proposed method preserves sharp colour edges with minimal colour artifacts and outperforms existing demosaicking methods on most images.
Citations: 4
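The core mechanism is the bilateral weight: a product of a spatial term and a range (intensity similarity) term, so that samples across a colour edge contribute almost nothing to the interpolation. A minimal sketch, with illustrative sigma values rather than the paper's settings:

```python
import numpy as np

def bilateral_weights(center_value, neighbor_values, distances,
                      sigma_s=1.0, sigma_r=20.0):
    """Bilateral weights: neighbours that are both near and similar dominate.

    Across a colour edge the range term collapses, so the interpolation
    effectively falls back to the closer, similar samples only, which is the
    behaviour exploited to avoid blurring edges.
    """
    neighbor_values = np.asarray(neighbor_values, dtype=float)
    distances = np.asarray(distances, dtype=float)
    w_spatial = np.exp(-(distances ** 2) / (2 * sigma_s ** 2))
    w_range = np.exp(-((neighbor_values - center_value) ** 2) / (2 * sigma_r ** 2))
    w = w_spatial * w_range
    return w / w.sum()

# Interpolating a missing sample from four neighbours: the two samples on the
# far side of an intensity edge receive almost no weight.
w = bilateral_weights(120, [118, 122, 40, 45], [1, 1, 1, 1])
estimate = np.dot(w, [118, 122, 40, 45])
print(w.round(3), round(float(estimate), 1))
```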
Saliency Detection Using Color Spatial Variance Weighted Graph Model
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.93
Xiaoyun Yan, Yuehuang Wang, Mengmeng Song, Man Jiang
Saliency detection, a recently active research field in computer vision, has a wide range of applications, such as pattern recognition, image retrieval, adaptive compression, and target detection. In this paper, we propose a saliency detection method based on a color spatial variance weighted graph model, which relies on a background prior. First, the original image is partitioned into small patches; we then apply mean-shift clustering to these patches to obtain clustering centers that represent the main colors of the whole image. In the modeling stage, all patches and the clustering centers are represented as nodes of a graph model. The saliency of each patch is defined as the weighted sum of the shortest-path costs from the patch to all clustering centers, where each shortest path is weighted according to the color spatial variance. Our saliency detection method is computationally efficient and outperforms state-of-the-art methods with higher precision and better recall on the popular MSRA1000 database.
Citations: 0
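A rough sketch of the graph construction described above: patches form a grid graph with colour-difference edge costs, clustering centres are extra nodes linked to every patch, and a patch's saliency is the weighted sum of its shortest-path costs to the centres. The weighting scheme, graph connectivity, and background prior are simplified assumptions, not the paper's exact model.

```python
import networkx as nx
import numpy as np

def patch_saliency(patch_colors, grid_shape, centers, center_weights):
    """Saliency per patch: weighted sum of shortest-path costs from the patch
    to every clustering-centre node on a colour-difference graph.

    patch_colors  : (H*W, 3) mean colour per patch (grid of H x W patches)
    centers       : (K, 3) dominant colours (e.g. from mean-shift clustering)
    center_weights: (K,) weights, assumed here to be derived from the colour
                    spatial variance of each cluster.
    """
    H, W = grid_shape
    colors = patch_colors.reshape(H, W, 3)
    G = nx.grid_2d_graph(H, W)
    for a, b in G.edges:                          # edge cost = colour difference
        G[a][b]["w"] = float(np.linalg.norm(colors[a] - colors[b]))
    for k, c in enumerate(centers):               # centre nodes link to all patches
        G.add_node(("c", k))
        for i in range(H):
            for j in range(W):
                G.add_edge(("c", k), (i, j),
                           w=float(np.linalg.norm(colors[i, j] - c)))
    sal = np.zeros((H, W))
    for k, wk in enumerate(center_weights):
        dist = nx.single_source_dijkstra_path_length(G, ("c", k), weight="w")
        for i in range(H):
            for j in range(W):
                sal[i, j] += wk * dist[(i, j)]
    return sal

rng = np.random.default_rng(1)
pc = rng.random((6 * 8, 3))
print(patch_saliency(pc, (6, 8), centers=pc[:3], center_weights=[1.0, 0.5, 0.2]).shape)
```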
Real-Time Binary Descriptor Based Background Modeling
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.125
Wan-Chen Liu, Shu-Zhe Lin, Min-Hsiang Yang, Chun-Rong Huang
In this paper, we propose a new binary descriptor based background modeling approach that is robust to lighting changes and dynamic backgrounds. Instead of using traditional parametric models, our background models are built from background instances described by binary descriptors computed from the observed background. As shown in the experiments, our method achieves better foreground detection results and fewer false alarms compared to state-of-the-art methods.
Citations: 13
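A minimal sketch of the idea, assuming a census-style binary descriptor and a per-pixel set of stored background instances compared by Hamming distance; the descriptor and update policy used in the paper may differ.

```python
import numpy as np

def census_descriptor(gray, i, j, radius=1):
    """3x3 census-style binary descriptor around pixel (i, j): one bit per
    neighbour, set when the neighbour is brighter than the centre."""
    patch = gray[i - radius:i + radius + 1, j - radius:j + radius + 1]
    return (patch.flatten() > gray[i, j]).astype(np.uint8)

class PixelBackgroundModel:
    """Store the last few descriptors seen at one pixel; flag the pixel as
    foreground when a new descriptor is far (in Hamming distance) from every
    stored background instance."""

    def __init__(self, n_instances=10, threshold=3):
        self.instances = []
        self.n_instances = n_instances
        self.threshold = threshold

    def classify_and_update(self, desc):
        dists = [int(np.sum(desc != m)) for m in self.instances]
        is_fg = bool(dists) and min(dists) > self.threshold
        if not is_fg:                              # absorb background observations
            self.instances.append(desc)
            self.instances = self.instances[-self.n_instances:]
        return is_fg

# Toy example: a sudden brightness change at one pixel is flagged as foreground.
frame0 = np.arange(400).reshape(20, 20)
frame1 = frame0.copy(); frame1[10, 10] = 255
model = PixelBackgroundModel()
print(model.classify_and_update(census_descriptor(frame0, 10, 10)),
      model.classify_and_update(census_descriptor(frame1, 10, 10)))
```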
A Multi-resolution Action Recognition Algorithm Using Wavelet Domain Features
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.143
H. Imtiaz, U. Mahbub, G. Schaefer, Md Atiqur Rahman Ahad
This paper proposes a novel approach for human action recognition using multi-resolution feature extraction based on the two-dimensional discrete wavelet transform (2D-DWT). Action representations can be considered image templates, which are useful for understanding various actions or gestures as well as for recognition and analysis. An action recognition scheme is developed based on extracting features from the frames of a video sequence. The proposed feature selection algorithm offers the advantage of very low feature dimensionality and therefore a lower computational burden. It is shown that the use of wavelet-domain features enhances the ability to distinguish different actions, resulting in very high within-class compactness and between-class separability of the extracted features, while certain undesirable phenomena, such as camera movement and changes in camera distance from the subject, are less severe in the frequency domain. Principal component analysis is performed to further reduce the dimensionality of the feature space. Extensive experiments on a standard benchmark database confirm that the proposed approach offers not only computational savings but also very high recognition accuracy.
Citations: 2
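A compact sketch of the pipeline described above, using PyWavelets for the 2D-DWT and scikit-learn for PCA: each frame is reduced to its coarse approximation coefficients, frames are stacked into a feature matrix, and PCA reduces the dimensionality. The parameter choices (wavelet, decomposition levels, number of components) are illustrative, not the paper's.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_features(frames, wavelet="haar", levels=2):
    """Per-frame multi-resolution features: repeatedly take the 2D-DWT, keep
    only the coarse approximation, and flatten it. A simplified stand-in for
    the paper's wavelet-domain feature selection."""
    feats = []
    for f in frames:
        a = f.astype(float)
        for _ in range(levels):
            a, _details = pywt.dwt2(a, wavelet)   # keep approximation, drop details
        feats.append(a.ravel())
    return np.vstack(feats)

# Example: a random 'video' of 30 frames (64x64) reduced to 8 principal components.
frames = np.random.rand(30, 64, 64)
X = wavelet_features(frames)
X_low = PCA(n_components=8).fit_transform(X)
print(X.shape, X_low.shape)
```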
Learning Fingerprint Orientation Fields Using Continuous Restricted Boltzmann Machines
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.37
M. Sahasrabudhe, A. Namboodiri
We aim to learn local orientation field patterns in fingerprints and correct distorted field patterns in noisy fingerprint images. This is formulated as a learning problem and achieved using two continuous restricted Boltzmann machines. The learnt orientation fields are then used in conjunction with traditional Gabor-based algorithms for fingerprint enhancement. Orientation fields extracted by gradient-based methods are local and do not consider neighboring orientations. If noise is present in a fingerprint, these methods perform poorly when enhancing the image, which affects fingerprint matching. This paper presents a method to correct the resulting noisy regions over patches of the fingerprint by training two continuous restricted Boltzmann machines. The continuous RBMs are trained with clean fingerprint images and applied to overlapping patches of the input fingerprint. Experimental results show that one can successfully restore patches of noisy fingerprint images.
Citations: 14
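The gradient-based orientation fields that the paper sets out to denoise can be computed block-wise by averaging doubled gradient angles, as in the standard sketch below; the continuous RBM training itself is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def orientation_field(img, block=16):
    """Block-wise gradient-based ridge orientation: the purely local estimate
    that serves as input to the learning stage."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    h, w = img.shape
    H, W = h // block, w // block
    theta = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            sx = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            sy = gy[i * block:(i + 1) * block, j * block:(j + 1) * block]
            # Average doubled angles so opposite gradient directions reinforce.
            vx = np.sum(2 * sx * sy)
            vy = np.sum(sx ** 2 - sy ** 2)
            theta[i, j] = 0.5 * np.arctan2(vx, vy) + np.pi / 2  # ridge direction
    return theta

print(orientation_field(np.random.rand(128, 128)).shape)
```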