
2013 2nd IAPR Asian Conference on Pattern Recognition: Latest Publications

Vehicle Detection in Satellite Images by Parallel Deep Convolutional Neural Networks
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.33
Xueyun Chen, Shiming Xiang, Cheng-Lin Liu, Chunhong Pan
Deep convolutional neural networks (DNNs) are the state-of-the-art machine learning method. They have been used in many recognition tasks, including handwritten digits, Chinese characters, and traffic signs. However, training and testing a DNN are time-consuming tasks. Practical vehicle detection applications require both speed and accuracy, so increasing the speed of a DNN while preserving its high accuracy matters for many recognition and detection applications. We introduce parallel branches into the DNN: the maps of each layer are divided into several parallel branches, each with the same number of maps, and there are no direct connections between different branches. Our parallel DNN (PNN) keeps the structure and dimensions of the DNN while reducing the total number of connections between maps. The more branches the maps are divided into, the faster the PNN becomes; the conventional DNN is the special case of a PNN with a single branch. Experiments on a large vehicle database showed that the detection accuracy of the PNN dropped only slightly as its speed increased. Even the fastest PNN (10 times faster than the DNN), whose branches have only two maps each, fully outperformed traditional feature-based methods (such as HOG and LBP). PNN thus offers a good way to balance the speed and accuracy requirements of many applications.
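As a rough, hypothetical sketch (not the authors' implementation), the effect of splitting a layer's maps into independent branches on the number of map-to-map connections can be counted directly; the layer sizes below are made up for illustration:

```python
def conv_weight_count(in_maps, out_maps, k, branches=1):
    """Weights in a k x k convolutional layer whose input and output maps
    are split into `branches` independent groups with no cross-branch
    connections. branches=1 is the conventional, fully connected DNN layer."""
    assert in_maps % branches == 0 and out_maps % branches == 0
    per_branch = (in_maps // branches) * (out_maps // branches) * k * k
    return branches * per_branch

# A conventional layer versus the same layer split into four branches:
full = conv_weight_count(in_maps=16, out_maps=16, k=5)               # 6400
pnn4 = conv_weight_count(in_maps=16, out_maps=16, k=5, branches=4)   # 1600
print(full / pnn4)  # 4.0 -- four branches cut the connections fourfold
```

In general, g branches cut the connection count by a factor of g, which is consistent with the abstract's observation that more branches mean a faster network.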
Citations: 70
Melanoma Classification Using Dermoscopy Imaging and Ensemble Learning
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.102
G. Schaefer, B. Krawczyk, M. E. Celebi, H. Iyatomi
Malignant melanoma, the deadliest form of skin cancer, is one of the most rapidly increasing cancers in the world. Early diagnosis is crucial, since if detected early, it can be cured through a simple excision. In this paper, we present an effective approach to melanoma classification from dermoscopic images of skin lesions. First, we perform automatic border detection to delineate the lesion from the background skin. Shape features are then extracted from this border, while colour and texture features are obtained based on a division of the image into clinically significant regions. The derived features are then used in a pattern classification stage, for which we employ a dedicated ensemble learning approach to address the class imbalance in the training data. Our classifier committee trains individual classifiers on balanced subspaces, removes redundant predictors based on a diversity measure, and combines the remaining classifiers using a neural network fuser. Experimental results on a large dataset of dermoscopic skin lesion images show that our approach works well, provides both high sensitivity and specificity, and that our classifier ensemble leads to statistically better recognition performance.
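The balanced-subspace idea can be illustrated with a minimal, hypothetical sketch (numpy only): a toy nearest-class-mean learner and a plain majority vote stand in for the paper's base classifiers and neural-network fuser, and the diversity-based pruning stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def balanced_subsample(X, y):
    """Draw an equal number of examples from each class, so every base
    classifier is trained on class-balanced data."""
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), n, replace=False)
                          for c in classes])
    return X[idx], y[idx]

class MeanClassifier:
    """Toy base learner: assign each sample to the nearest class mean."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = ((X[:, None, :] - self.means_[None]) ** 2).sum(-1)
        return self.classes_[d.argmin(axis=1)]

def ensemble_predict(models, X):
    """Majority vote in place of the paper's neural-network fuser."""
    votes = np.stack([m.predict(X) for m in models])
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Imbalanced toy data: the positive (lesion) class is rare.
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(3, 1, (5, 2))])
y = np.array([0] * 95 + [1] * 5)
models = [MeanClassifier().fit(*balanced_subsample(X, y)) for _ in range(5)]
print(ensemble_predict(models, np.array([[3.0, 3.0], [0.0, 0.0]])))  # [1 0]
```

Because every committee member sees a balanced subsample, the rare class is not drowned out the way it would be by a single classifier trained on the raw 95:5 split.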
Citations: 10
Deformed and Touched Characters Recognition
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.193
Tadashi Hyuga, H. Wada, Tomoyoshi Aizawa, Yoshihisa Ijiri, M. Kawade
In this demonstration, we will show our optical character recognition (OCR) technique. Character deformation and touching problems often occur during high-speed printing processes in the machine vision industry, making it difficult for an OCR system to segment and recognize characters properly. To solve these problems, we propose a novel OCR technique that is robust against deformation and touching. It deliberately over-segments character regions in a simple way, recognizes all segments and merged regions, and obtains the optimal segmentation using graph theory.
Citations: 1
Multi-layered Background Modeling for Complex Environment Surveillance
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.83
S. Yoshinaga, Atsushi Shimada, H. Nagahara, R. Taniguchi, Kouichiro Kajitani, Takeshi Naito
Many background models have been proposed to adapt to "illumination changes" and "dynamic changes" such as the swaying motion of tree branches. However, the problem of background maintenance in complex environments, where foreground objects pass in front of stationary objects that have ceased moving, is still far from completely solved. To address this problem, we propose a framework for multi-layered background modeling, in which we maintain background models for stationary objects hierarchically, in addition to the one for the initial background. To realize this framework, we also propose a spatio-temporal background model based on the similarity of intensity changes among pixels. Experimental results on complex scenes, such as a bus stop and an intersection, show that thanks to the multi-layered background modeling framework, the proposed method can adapt to both the appearance and the disappearance of stationary objects.
Citations: 1
New Banknote Number Recognition Algorithm Based on Support Vector Machine
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.115
S. Gai, Guowei Yang, S. Zhang, M. Wan
Detecting the banknote serial number is an important task in business transactions. In this paper, we propose a new banknote number recognition method. Each banknote image is preprocessed to locate the banknote number region. Each number image is then divided into non-overlapping partitions, and the average gray value of each partition is used as the feature vector for recognition. The optimal kernel function is obtained by semi-definite programming (SDP). Experimental results show that the proposed method outperforms MASK, BP, HMM, and single-SVM classifiers.
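The partition-averaging feature described above is easy to sketch in numpy; the grid and image sizes below are made-up stand-ins, and the SVM stage with the SDP-optimized kernel is omitted:

```python
import numpy as np

def block_mean_features(img, grid=(4, 8)):
    """Divide a grayscale number image into non-overlapping partitions and
    return the average gray value of each partition as the feature vector."""
    h, w = img.shape
    gh, gw = grid
    assert h % gh == 0 and w % gw == 0, "image must divide evenly into the grid"
    # Row-major reshape groups each (h//gh) x (w//gw) block along axes 1 and 3.
    blocks = img.reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3)).ravel()

img = np.arange(32 * 32, dtype=float).reshape(32, 32)  # stand-in number image
feat = block_mean_features(img)
print(feat.shape)  # (32,): one mean gray value per partition
```

The resulting fixed-length vector is what a kernel classifier such as an SVM would consume.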
Citations: 6
Consensus Region Merging for Image Segmentation
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.142
F. Nielsen, R. Nock
Image segmentation is a fundamental task of image processing that consists of partitioning an image by grouping pixels into homogeneous regions. We propose a novel segmentation algorithm that combines many runs of a simple and fast randomized segmentation algorithm. Our algorithm also yields a soft-edge closed-contour detector. We describe the theoretical probabilistic framework and report on our implementation, which experimentally corroborates that performance increases with the number of runs.
Citations: 8
A Fast Alternative for Template Matching: An ObjectCode Method
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.80
Yiping Shen, Shuxiao Li, Chenxu Wang, Hongxing Chang
In this paper, an ObjectCode method is presented for fast template matching. First, local binary patterns are computed for the template and the search image. Then, a selection strategy is proposed to choose a small portion of pixels (on average 1.87%) from the template, whose patterns are concatenated to form an ObjectCode representing the characteristics of the target region of interest. For each candidate in the search image, a candidate code is formed from the correspondingly selected pixels. Finally, the similarities between the ObjectCode and the candidate codes are calculated efficiently by a new distance measure based on Hamming distance. Extensive experiments demonstrated that our method is 13.7 times faster than FFT-based template matching and 1.1 times faster than two-stage partial correlation elimination (TPCE) with similar performance, making it a fast alternative to current template matching methods.
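A minimal sketch of the pipeline's ingredients (not the paper's code): an 8-neighbour LBP, a uniform-grid pixel selection standing in for the paper's learned ~1.87% selection strategy, and a bitwise Hamming distance between codes:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour local binary pattern code for every interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def hamming(a, b):
    """Bitwise Hamming distance between two equal-length uint8 code vectors."""
    return int(np.unpackbits(a ^ b).sum())

rng = np.random.default_rng(1)
template = rng.integers(0, 256, (16, 16))
sel = np.s_[::3, ::3]                      # hypothetical sparse pixel selection
obj_code = lbp_codes(template)[sel].ravel()

# An identical candidate patch matches with distance 0:
print(hamming(obj_code, lbp_codes(template.copy())[sel].ravel()))  # 0
# A different patch gives a larger distance:
print(hamming(obj_code, lbp_codes(rng.integers(0, 256, (16, 16)))[sel].ravel()))
```

Because the comparison touches only the selected pixels and uses bit operations, scanning candidates is far cheaper than full-template correlation, which is the speed argument the abstract makes.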
Citations: 1
HEp-2 Cell Classification Using Multi-dimensional Local Binary Patterns and Ensemble Classification
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.175
G. Schaefer, N. Doshi, B. Krawczyk
Indirect immunofluorescence imaging is a fundamental technique for detecting antinuclear antibodies in HEp-2 cells. This is particularly useful for the diagnosis of autoimmune diseases and other important pathological conditions involving the immune system. HEp-2 cells can be categorised into six groups: homogeneous, fine speckled, coarse speckled, nucleolar, cytoplasmic, and centromere cells, which give indications of different autoimmune diseases. This categorisation is typically performed by manual evaluation, which is time-consuming and subjective. In this paper, we present a method for automatic classification of HEp-2 cells using local binary pattern (LBP) based texture descriptors and ensemble classification. In our approach, we utilise multi-dimensional LBP (MD-LBP) histograms, which record multi-scale texture information while maintaining the relationships between the scales. Our dedicated ensemble classification approach is based on a set of heterogeneous base classifiers obtained through the application of different feature selection algorithms, a diversity-based pruning stage, and a neural network classifier fuser. We test our algorithm on the ICPR 2012 HEp-2 contest benchmark dataset and demonstrate that it outperforms all algorithms entered in the competition and exceeds the performance of a human expert.
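One plausible reading of the multi-scale idea, sketched hypothetically in numpy: compute 8-neighbour LBP codes at two radii and bin the code *pairs* into a joint two-dimensional histogram, so the co-occurrence between scales is retained instead of concatenating independent per-scale histograms. The radii and image below are illustrative only.

```python
import numpy as np

def lbp_codes(img, r):
    """8-neighbour LBP code at sampling radius r for every interior pixel."""
    c = img[r:-r, r:-r]
    shifts = [(-r, -r), (-r, 0), (-r, r), (0, r), (r, r), (r, 0), (r, -r), (0, -r)]
    code = np.zeros(c.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[r + dy:img.shape[0] - r + dy, r + dx:img.shape[1] - r + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def md_lbp_histogram(img):
    """Joint histogram over (radius-1 code, radius-2 code) pairs per pixel."""
    c1 = lbp_codes(img, 1)[1:-1, 1:-1]   # crop to the radius-2 interior
    c2 = lbp_codes(img, 2)
    joint = np.bincount((c1 * 256 + c2).ravel(), minlength=256 * 256)
    return joint.reshape(256, 256)

img = np.random.default_rng(0).integers(0, 256, (32, 32))
hist = md_lbp_histogram(img)
print(hist.shape, hist.sum())  # (256, 256), one count per interior pixel
```

A concatenation of per-scale histograms would be the marginals of this table; the joint form additionally records which fine-scale pattern co-occurs with which coarse-scale pattern at the same pixel.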
Citations: 4
Group Leadership Estimation Based on Influence of Pointing Actions
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.181
H. Habe, K. Kajiwara, Ikuhisa Mitsugami, Y. Yagi
When we act in a group with family members, friends, or colleagues, each group member often plays a particular role in achieving a goal that all group members have in common. This paper focuses on leadership among the various roles observed in a social group and proposes a method to estimate the leader based on an interaction analysis. To estimate the leader of a group, we extract the pointing actions of each person and measure how other people change their actions in response, i.e. how much influence the pointing actions have. When one specific person tends to make pointing actions and those actions strongly influence other members, that person is very likely the leader of the group. The proposed method is based on this intuition and measures the influence of pointing actions using motion trajectories. We demonstrate the potential of the proposed method for estimating leadership through a comparison between the computed influence measures and subjective evaluations on actual videos taken in a science museum.
Citations: 0
Improving Sampling Criterion for Alpha Matting
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.145
Jun Cheng, Z. Miao
Natural image matting is a useful and challenging task in image processing and video editing. It aims to accurately extract a foreground object of arbitrary shape from an image with the help of user-provided extra information, such as a trimap. In this paper, we present a new sampling criterion based on random search for image matting. The improved random search algorithm effectively avoids leaving good samples out and handles the relation between nearby and distant samples well. In addition, an effective cost function is adopted to evaluate the candidate samples. Experimental results show that our method can produce high-quality mattes.
Citations: 8