
Latest publications from the 2013 2nd IAPR Asian Conference on Pattern Recognition

Manifold Regularized Gaussian Mixture Model for Semi-supervised Clustering
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.126
Haitao Gan, N. Sang, Rui Huang, X. Chen
Over the last few decades, the Gaussian Mixture Model (GMM) has attracted considerable interest in data mining and pattern recognition. GMM clusters data by estimating the parameters of multiple Gaussian components using Expectation-Maximization (EM). Recently, Locally Consistent GMM (LCGMM) has been proposed to improve the clustering performance of GMM by exploiting the local manifold structure modeled by a p-nearest-neighbor graph. In practice, various kinds of prior knowledge may be available to guide the clustering process and improve performance. In this paper, we introduce a semi-supervised method, called Semi-supervised LCGMM (Semi-LCGMM), in which prior knowledge is provided in the form of class labels for part of the data. Semi-LCGMM incorporates this prior knowledge into the maximum likelihood function of LCGMM and is solved by EM. It is worth noting that in our algorithm each class has multiple Gaussian components, whereas in the unsupervised setting each class has only one Gaussian component. Experimental results on several datasets demonstrate the effectiveness of our algorithm.
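The way labels enter the EM procedure can be sketched as follows: labeled points have their responsibilities clamped to their class in the E-step, while unlabeled points keep the usual soft assignments. This is a minimal illustration with one Gaussian per class and without the LCGMM manifold-regularization term (the paper uses multiple components per class and a graph penalty); the function name and parameters are ours.

```python
import numpy as np

def semi_supervised_gmm(X, y, n_classes, n_iter=50, reg=1e-6):
    """EM for a GMM where some points carry class labels (y >= 0)
    and the rest are unlabeled (y == -1). One Gaussian per class
    for simplicity; Semi-LCGMM uses several components per class."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    mu = X[rng.choice(n, n_classes, replace=False)]   # init means from data
    cov = np.stack([np.eye(d)] * n_classes)
    pi = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibilities from Gaussian densities
        R = np.empty((n, n_classes))
        for k in range(n_classes):
            diff = X - mu[k]
            inv = np.linalg.inv(cov[k])
            logdet = np.linalg.slogdet(cov[k])[1]
            ll = -0.5 * (np.sum(diff @ inv * diff, axis=1)
                         + logdet + d * np.log(2 * np.pi))
            R[:, k] = pi[k] * np.exp(ll)
        R /= R.sum(axis=1, keepdims=True)
        # prior knowledge: clamp responsibilities of labeled points
        labeled = y >= 0
        R[labeled] = 0.0
        R[labeled, y[labeled]] = 1.0
        # M-step: weighted re-estimation of mixture parameters
        Nk = R.sum(axis=0)
        pi = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        for k in range(n_classes):
            diff = X - mu[k]
            cov[k] = (R[:, k, None] * diff).T @ diff / Nk[k] + reg * np.eye(d)
    return np.argmax(R, axis=1)
```

Because the clamped points anchor each component to a class, the returned cluster indices coincide with class labels, unlike plain unsupervised GMM where cluster identities are arbitrary.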
Citations: 5
OCR from Video Stream of Book Flipping
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.24
Dibyayan Chakraborty, P. Roy, J. Álvarez, U. Pal
Optical Character Recognition (OCR) on a video stream of flipping pages is a challenging task: pages flipped at random speeds make it difficult to identify the frames that contain the open page image (OPI) with the best readability, and low resolution, blur, and shadows add significant noise to the selection of proper frames for OCR. In this work, we focus on identifying the set of optimal representative frames for each OPI from a video stream of flipping pages and then performing OCR without any dedicated hardware. To the best of our knowledge this is the first work in this area. We present an algorithm that exploits cues from the edge information of flipping pages. These cues, extracted from the region of interest (ROI) of each frame, determine whether a page is flipping or open; an SVM classifier trained on the edge-cue information makes this determination. For each OPI we obtain a set of frames and choose the central frame of that set as the representative frame of the corresponding OPI, on which OCR is performed. Experiments on video documents recorded with a standard-resolution camera validate the frame-selection algorithm with 88% accuracy.
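The frame-selection step — group consecutive open-page frames and keep the central one — can be sketched independently of the SVM stage. `representative_frames` and its input format are our own illustration, assuming per-frame open/flipping decisions are already available:

```python
from itertools import groupby

def representative_frames(labels):
    """Given per-frame open/flipping labels (True = open-page frame,
    e.g. from an SVM on edge cues), return the index of the central
    frame of each run of consecutive open frames."""
    reps, idx = [], 0
    for is_open, run in groupby(labels):
        run = list(run)
        if is_open:
            reps.append(idx + len(run) // 2)  # central frame of the run
        idx += len(run)
    return reps
```

Each returned index identifies one representative frame per open-page interval, which is then passed to the OCR engine.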
Citations: 4
Matrix-Based Hierarchical Graph Matching in Off-Line Handwritten Signatures Recognition
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.164
M. Piekarczyk, M. Ogiela
In this paper, a graph-based off-line handwritten signature verification system is proposed. The system can automatically identify global and local features that exist across different signatures of the same person. Based on these features it is possible to verify whether a signature is a forgery. The structural description, in the form of a hierarchical attributed random graph set, is transformed into matrix-vector structures. These structures can be used directly as matching patterns when an examined signature is analyzed. The proposed approach can be applied to off-line signature verification systems, especially for kanji-like or ideogram-based, structurally complex signatures.
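The core idea — replacing explicit graph matching with linear-algebra operations on matrix-vector encodings — can be illustrated with a much-simplified sketch (flat graphs and a single weighted distance; the paper's hierarchical attributed random graph sets are richer). All names here are ours:

```python
import numpy as np

def graph_to_matrix(n_nodes, edges, node_attrs):
    """Flatten an attributed graph into matrix-vector form: a symmetric
    weighted adjacency matrix plus a node-attribute vector, so signatures
    can be compared by plain matrix distances instead of graph matching."""
    A = np.zeros((n_nodes, n_nodes))
    for i, j, w in edges:
        A[i, j] = A[j, i] = w
    return A, np.asarray(node_attrs, dtype=float)

def signature_distance(g1, g2, alpha=0.5):
    """Blend of structural (adjacency) and attribute distances."""
    A1, v1 = g1
    A2, v2 = g2
    return alpha * np.linalg.norm(A1 - A2) + (1 - alpha) * np.linalg.norm(v1 - v2)
```

A genuine/forgery decision would then threshold `signature_distance` against reference signatures of the claimed writer.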
Citations: 10
Mobile Robot Photographer
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.192
Satoru Suzuki, Y. Mitsukura
In this study, we present a mobile photographing robot that moves around entertainment facilities and automatically takes pictures of people who want a commemorative photo. The robot approaches a target person by detecting his/her face in images captured by a monocular camera mounted on the robot; in our method, the face detection results control the robot's behavior. To validate the usefulness of the proposed method, the performance of the mobile photographing robot is evaluated. Experimental results show that the robot can approach a person and take a picture automatically, without operator intervention, from the approach through to photographing.
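A face-detection-driven approach behavior of this kind is often a simple closed loop on the detected bounding box. The sketch below is a hypothetical illustration, not the paper's controller; the gain and the stopping threshold are invented:

```python
def approach_command(face_box, frame_width, stop_area_ratio=0.15):
    """Map a detected face bounding box (x, y, w, h) in a monocular
    frame to a (turn, forward) command: turn toward the face's
    horizontal offset, drive forward until the face fills enough of
    the frame. Threshold value is illustrative only."""
    x, y, w, h = face_box
    cx = x + w / 2.0
    turn = (cx - frame_width / 2.0) / (frame_width / 2.0)  # normalized to [-1, 1]
    close_enough = (w / float(frame_width)) >= stop_area_ratio
    forward = 0.0 if close_enough else 1.0                 # stop when near the subject
    return turn, forward
```

When `forward` drops to 0 the robot would trigger the camera shutter instead of moving.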
Citations: 2
Locality-Constrained Collaborative Sparse Approximation for Multiple-Shot Person Re-identification
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.14
Yang Wu, M. Mukunoki, M. Minoh
Person re-identification is becoming a hot research topic due to its academic importance and attractive applications in visual surveillance. This paper focuses on the harder and more important multiple-shot re-identification problem. Treating it as a set-based classification problem, we propose a new model called Locality-constrained Collaborative Sparse Approximation (LCSA), designed to be as efficient, effective and robust as possible. It improves the recently proposed Collaborative Sparse Approximation (CSA) model by introducing two types of locality constraints that enhance the quality of the data used for collaborative approximation. Extensive experiments demonstrate that LCSA is not only much better than CSA in terms of effectiveness and robustness, but also superior to other related methods.
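One way to picture a locality-constrained collaborative approximation is ridge regression over the gallery where the penalty on each atom grows with its distance from the probe, followed by class-wise residual comparison. This single-vector sketch (the paper works with image sets, and its constraints differ) uses our own names and parameters:

```python
import numpy as np

def locality_ridge_code(D, x, lam=0.1):
    """Code probe vector x over dictionary columns D with a ridge
    penalty weighted by each atom's distance to x, so nearby (local)
    gallery samples receive larger coefficients."""
    dist = np.linalg.norm(D - x[:, None], axis=0)   # per-atom locality
    P = np.diag(lam * (1.0 + dist))                 # heavier penalty on far atoms
    return np.linalg.solve(D.T @ D + P @ P, D.T @ x)

def classify(D, labels, x):
    """Assign x to the gallery class whose atoms reconstruct it best."""
    w = locality_ridge_code(D, x)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        wc = np.where(mask, w, 0.0)                 # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ wc)
    return min(residuals, key=residuals.get)
```

The locality weighting is what distinguishes this from plain collaborative representation: distant gallery samples are discouraged from participating in the reconstruction.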
Citations: 8
A Joint Learning-Based Method for Multi-view Depth Map Super Resolution
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.89
Jing Li, Zhichao Lu, Gang Zeng, Rui Gan, Long Wang, H. Zha
Depth map super resolution from multi-view depth or color images has long been explored. Multi-view stereo methods produce fine details in textured areas, and depth recordings compensate where stereo fails, e.g. in textureless regions. However, the resolution of depth maps from depth sensors is rather low. Our objective is to produce a high-res depth map by fusing different sensors across multiple views. In this paper we present a learning-based method that infers a high-res depth map from our synthetic database by minimizing the proposed energy. As depth alone is not sufficient to describe the geometry of a scene, we use additional features such as normal and curvature, which capture high-frequency details of the surface. Our optimization framework exploits multi-view depth and color consistency, normal and curvature similarity between the low-res input and the database, and smoothness constraints on pixel-wise depth-color coherence as well as on patch borders. Experimental results on both synthetic and real data show that our method outperforms the state-of-the-art.
Citations: 2
Multifocus Image Fusion via Region Reconstruction
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.92
Jiangyong Duan, Gaofeng Meng, Shiming Xiang, Chunhong Pan
This paper presents a new method for multifocus image fusion. We formulate the problem as an optimization framework with three terms to model common visual artifacts. A reconstruction error term is used to remove the boundary seam artifacts, and an out-of-focus energy term is used to remove the ringing artifacts. Together with an additional smoothness term, these three terms define the objective function of our framework. The objective function is then minimized by an efficient greedy iteration algorithm. Our method produces high quality fusion results with few visual artifacts. Comparative results demonstrate the efficiency of our method.
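As a point of comparison, a common baseline for multifocus fusion selects each region from whichever input is sharper under a focus measure; the paper's energy-minimization framework is designed precisely to avoid the boundary-seam and ringing artifacts that such naive per-patch selection can produce. A minimal baseline sketch (not the paper's method):

```python
import numpy as np

def laplacian_energy(img):
    """Focus measure: squared response of a discrete Laplacian."""
    L = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
         + img[1:-1, :-2] + img[1:-1, 2:])
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = L ** 2
    return out

def fuse(img_a, img_b, patch=8):
    """Per-patch selection fusion of two grayscale float images:
    copy each patch from whichever input is sharper there."""
    ea, eb = laplacian_energy(img_a), laplacian_energy(img_b)
    out = np.empty_like(img_a)
    H, W = img_a.shape
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            sl = (slice(i, i + patch), slice(j, j + patch))
            out[sl] = img_a[sl] if ea[sl].sum() >= eb[sl].sum() else img_b[sl]
    return out
```

The hard patch boundaries in `fuse` are exactly where seam artifacts appear, which motivates the reconstruction-error and smoothness terms in the paper's objective.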
Citations: 2
Hand Gesture Segmentation in Uncontrolled Environments with Partition Matrix and a Spotting Scheme Based on Hidden Conditional Random Fields
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.153
Yi Yao, Chang-Tsun Li
Hand gesture segmentation is the task of interpreting and spotting meaningful hand gestures in continuous gesture sequences that also contain non-sign transitional hand movements. In real-world scenarios, unconstrained environments can largely affect the performance of gesture segmentation. In this paper, we propose a gesture spotting scheme that detects and monitors all eligible hand candidates in the scene and evaluates their movement trajectories with a novel method called Partition Matrix, based on Hidden Conditional Random Fields. Our experimental results demonstrate that the proposed method can spot meaningful hand gestures in a continuous gesture stream with 2-4 people moving randomly in an uncontrolled background.
Citations: 2
Facial Aging Simulator Based on Patch-Based Facial Texture Reconstruction
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.187
Akinobu Maejima, A. Mizokawa, Daiki Kuwahara, S. Morishima
We propose a facial aging simulator that can synthesize a photo-realistic aged-face image for criminal investigation. Our aging simulator is based on patch-based facial texture reconstruction with a wrinkle aging pattern model. The advantage of our method is that it synthesizes an aged-face image with detailed skin texture, such as spots and somberness of the facial skin, as well as age-related facial wrinkles, without the blurs that arise in the linear combination model from the lack of accurate pixel-wise alignment, while maintaining the identity of the original face.
Citations: 0
A Method for Exaggerative Caricature Generation from Real Face Image
Pub Date : 2013-11-05 DOI: 10.1109/ACPR.2013.150
Chenglong Li, Z. Miao
Generating exaggerated caricature portraits from real face images is a hot topic in animation creation and digital multimedia entertainment. Building on improvements to traditional ASM facial feature points, this paper gives a detailed definition of human facial features and describes them in terms of proportions. We then propose a method, based on these facial features and the relationships between them, for generating an exaggerated portrait from a real face image. The method applies a "contrast principle" to obtain the exaggerated face shape from two main aspects: facial-contour exaggeration and facial-feature exaggeration. Finally, the method combines feature-point-based MLS image deformation to generate the exaggerated portrait. Our experiments show that the method is practicable and produces good results.
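The "contrast principle" — exaggerate whatever deviates from the average face — can be sketched at the landmark level; the photograph would then be warped to the exaggerated landmarks with feature-point-based MLS deformation. The amplification factor here is illustrative:

```python
import numpy as np

def exaggerate(landmarks, mean_face, k=1.5):
    """Contrast-principle caricature: push each facial landmark away
    from the corresponding average-face landmark by amplifying its
    deviation. k > 1 controls exaggeration strength (value is
    illustrative); k = 1 reproduces the input face."""
    landmarks = np.asarray(landmarks, dtype=float)
    mean_face = np.asarray(mean_face, dtype=float)
    return mean_face + k * (landmarks - mean_face)
```

Features close to the average stay nearly unchanged, while distinctive ones are pushed further out, which is what makes the result read as a caricature of that particular face.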
Citations: 1