
Latest publications from the 2008 IEEE International Conference on Shape Modeling and Applications

SHape REtrieval contest 2008: Stability of watertight models
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547975
S. Biasotti, M. Attene
In this report we present the results of the Stability on Watertight Models Track. The aim of this track is to evaluate the stability of algorithms with respect to input perturbations that modify the representation of the object without changing its overall shape significantly. Examples of these perturbations include geometric noise, varying sampling patterns, small shape deformations and topological noise.
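The first perturbation type, geometric noise, can be illustrated with a minimal sketch (assuming NumPy and a plain vertex-array representation; `perturb_vertices` is a hypothetical helper for illustration, not part of the benchmark):

```python
import numpy as np

def perturb_vertices(verts, sigma=0.01, seed=0):
    """Geometric noise: jitter each vertex coordinate with Gaussian noise
    whose scale is a fraction of the model's bounding-box diagonal."""
    rng = np.random.default_rng(seed)
    diag = np.linalg.norm(verts.max(0) - verts.min(0))
    return verts + rng.normal(0.0, sigma * diag, verts.shape)
```

A stability-oriented benchmark would compare retrieval results on the original and perturbed versions of the same model; sampling and topological perturbations require mesh-level operations beyond this sketch.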
Citations: 31
SHape REtrieval contest 2008: 3D face scans
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547979
F. T. Haar, M. Daoudi, R. Veltkamp
Three-dimensional face recognition is a challenging task with a large number of proposed solutions [1, 2]. With variations in pose and expression, the identification of a face scan based on 3D geometry is difficult. To improve on this task and to evaluate existing face matching methods, large sets of 3D faces have been constructed, such as the FRGC [3], BU-3DFE [4], and GavabDB [5] databases. When used in the same experimental way, these publicly available sets allow a fair comparison of different methods. Usually, researchers compare the recognition rates (or identification rates) of different methods. To identify a person, his or her 3D face scan is enrolled as a query in the database; if the most similar scan in the database (other than the query) belongs to the same person, he or she is identified correctly. For a set of queries, the recognition rate is computed as the average of zeros (no identification) and ones (correct identification). However, the recognition rate is a limited evaluation measure, because it considers merely the closest match of each query. If a database contains two scans per expression per subject and each scan is used as a query once, the similar scan is bound to appear at the top of the ranked list. Such an experiment boosts the recognition rate, but gives no insight into the expression invariance of different methods. For that, an evaluation measure is required that takes a larger part of the ranked list into account. In this contest we compare different face matching methods using a large number of performance measures. As a test set we have used a processed subset of the GavabDB [5], which contains several expressions and pose variations per subject.

2 DATABASE

For the retrieval contest of 3D faces we have used a subset of the GavabDB [5]. The GavabDB consists of Minolta Vi-700 laser range scans of 61 different subjects. The subjects, of which 45 are male and 16 are female, are all Caucasian. Each subject was scanned nine times for different poses and expressions, namely six neutral expression scans and three scans with an expression. The neutral scans include two different frontal scans, one scan while looking up (+35°), one while looking down (-35°), one from the right side (+90°), and one from the left side (-90°). The expression scans include one with a smile, one with a pronounced laugh, and an “arbitrary expression” freely chosen by the subject.
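The recognition-rate computation described in the abstract reduces to a per-query hit test on the top-ranked match; a minimal sketch (the function name and the dict-of-ranked-lists layout are illustrative assumptions, not the track's evaluation code):

```python
def recognition_rate(ranked, labels):
    """Rank-1 identification rate: a query scores 1 if the top-ranked
    non-query scan belongs to the same subject, 0 otherwise; the rate
    is the mean over all queries."""
    hits = sum(1 for q, lst in ranked.items() if labels[lst[0]] == labels[q])
    return hits / len(ranked)
```

Because this measure looks only at the single closest match, two near-identical scans per subject in the database all but guarantee a hit, which is exactly the limitation that motivates the track's broader ranked-list measures.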
Citations: 28
SHREC’08 entry: Forward neural network-based 3D model retrieval
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547992
Yujie Liu, Xiaolan Yao, Zongmin Li
In this paper, a forward neural network (FNN) is used for 3D model retrieval. A descriptor based on the exponentially decaying Euclidean distance transform (EDT) is adopted to represent the features of a 3D model. As a machine learning method, the FNN is trained on the PSB training data and then used to rank the test data set in this contest.
Citations: 2
SHREC’08 entry: Visual based 3D CAD retrieval using Fourier Mellin Transform
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547984
Xiaolan Li, A. Godil, A. I. Wagan
The Fourier Mellin Transform (FMT) has been used effectively in previous work for 2D image analysis, reconstruction, and retrieval. In this paper, we perform FMT-based 3D shape retrieval on the Purdue Shape Benchmark. The whole procedure includes three steps: 1) generate silhouettes along the six principal directions of each 3D model; 2) compute a collection of FMT coefficients, which are translation, scale, and rotation invariant, for all the silhouettes; and 3) compute a match measure between the query's coefficient collection and those in the 3D shape repository. The main contribution of this paper is the novel approach of extracting 3D signatures with the Fourier Mellin Transform. Our experimental results validate the effectiveness of the approach.
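The invariance chain behind an FMT descriptor can be sketched as follows (a minimal NumPy sketch, assuming a single silhouette image; the nearest-neighbor log-polar resampling and the function name are illustrative, not the authors' implementation): the FFT magnitude discards translation, log-polar resampling turns rotation and scale into shifts, and a second FFT magnitude discards those shifts.

```python
import numpy as np

def fourier_mellin_descriptor(img, n_r=32, n_theta=32):
    # 1) FFT magnitude is invariant to (circular) translation.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    # 2) Log-polar resampling maps rotation/scale to shifts.
    cy, cx = np.array(mag.shape) / 2.0
    r_max = np.log(min(cx, cy))
    rs = np.exp(np.linspace(0.0, r_max, n_r))
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = (cy + rs[:, None] * np.sin(thetas)).astype(int) % mag.shape[0]
    xs = (cx + rs[:, None] * np.cos(thetas)).astype(int) % mag.shape[1]
    logpolar = mag[ys, xs]
    # 3) A second FFT magnitude discards those shifts as well.
    return np.abs(np.fft.fft2(logpolar))
```

Matching (step 3) then reduces to comparing these coefficient arrays, e.g. with an L2 distance summed over the six silhouettes.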
Citations: 6
Salient local visual features for shape-based 3D model retrieval
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547955
Ryutarou Ohbuchi, Kunio Osada, T. Furuya, T. Banno
In this paper, we describe a shape-based 3D model retrieval method based on multi-scale local visual features. The features are extracted from 2D range images of the model viewed from uniformly sampled locations on a view sphere. The method is appearance-based and accepts any model that can be rendered as a range image. For each range image, a set of 2D multi-scale local visual features is computed using the scale invariant feature transform (SIFT) [22] algorithm. To reduce the cost of distance computation and feature storage, the set of local features describing a 3D model is integrated into a histogram using the bag-of-features approach. Our experiments using two standard benchmarks, one for articulated shapes and the other for rigid shapes, showed that the method achieves performance comparable or superior to some of the most powerful 3D shape retrieval methods.
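The bag-of-features integration step amounts to vector quantization against a codebook followed by a histogram; a minimal NumPy sketch (the codebook would normally come from k-means over training descriptors, which is omitted here, and the function name is illustrative):

```python
import numpy as np

def bof_histogram(local_feats, codebook):
    """Quantize each local descriptor to its nearest codeword and
    accumulate a normalized histogram: the model's global signature."""
    d2 = ((local_feats[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                       # nearest codeword per feature
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Two models are then compared by a single histogram distance instead of exhaustive pairwise matching of thousands of local features.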
Citations: 300
SHape REtrieval contest 2008: Generic models
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547978
Ryutarou Ohbuchi
The first of the SHREC series of 3D model retrieval contests, SHREC 2006 [5], organized by Prof. Veltkamp et al., has made an impact on the way researchers compare the performance of their 3D model retrieval methods. The task was to retrieve polygon soup models found in the Princeton Shape Benchmark database [5], which have diverse shapes and semantics. While many researchers used SHREC 2006 as their benchmark, there has been no “official” contest since 2006 that used the same SHREC 2006 format with up-to-date algorithms and methods. SHREC 2007 added new tracks, e.g., for 3D face models, watertight models, protein models, CAD models, partial matching, and relevance feedback. However, the format of SHREC 2006 was missing. The SHREC 2008 Generic Models Track (GMT) tries to repeat SHREC 2006 so that we can compare state-of-the-art methods for polygon soup models using a stable benchmark dataset and ground truth classifications. A change from SHREC 2006 to the SHREC 2008 GMT is the acknowledgement of learning-based algorithms for 3D model retrieval. The SHREC 2008 GMT has two entry categories, depending on whether supervised learning is used. We wanted to encourage various forms of learning algorithms, as we believe learning algorithms are as essential as the features themselves for effective 3D model retrieval. At the same time, we do not want to discourage methods without supervised learning. So we created two sub-tracks, one for unsupervised methods and the other for supervised methods. To test the behavior of supervised methods on queries having “unseen” ground truth classifications, we added a new set of queries in addition to the original set used in SHREC 2006.
Citations: 2
Example based skeletonization using harmonic one-forms
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547949
Ying He, Xian Xiao, S. H. Soon
This paper presents a method to extract skeletons from examples. Our method is based on the observation that many deformations in real-world applications are isometric or near-isometric. By taking advantage of the intrinsic property of the harmonic 1-form, i.e., that it is determined by the metric alone and is independent of resolution and embedding, our method can easily find a consistent mapping between the reference and example poses, which may differ in resolution and triangulation. We first construct the skeleton-like Reeb graph of a harmonic function defined on the given poses. Then, by examining the changes of mean curvature, we identify the initial locations of the joints. Finally, we refine the joint locations by solving a constrained optimization problem. To demonstrate the efficacy of our method, we apply the extracted skeletons to pose space deformation and skeleton transfer.
Citations: 4
OCTOR: OCcurrence selecTOR in pattern hierarchies
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547972
J. Jang, J. Rossignac
Hierarchies of patterns of features, of sub-assemblies, or of CSG sub-expressions are used in architectural and mechanical CAD to eliminate laborious repetitions from the design process. Yet, often the placement, shape, or even existence of a selection of the repeated occurrences in the pattern must be adjusted. The specification of a desired selection of occurrences in a hierarchy of patterns is often tedious (involving repetitive steps) or difficult (requiring interaction with an abstract representation of the hierarchy graph). The OCTOR system introduced here addresses these two drawbacks simultaneously, offering an effective and intuitive solution, which requires only two mouse-clicks to specify any one of a wide range of possible selections. It does not require expanding the graph or storing an explicit list of the selected occurrences and is simple to compute. It is hence well suited for a variety of CAD applications, including CSG, feature-based design, assembly mock-up, and animation. We discuss a novel representation of a selection, a technology that makes it possible to use only two mouse-clicks for each selection, and the persistence of these selections when the hierarchy of patterns is edited.
Citations: 4
A 3D face matching framework
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547956
F. T. Haar, R. Veltkamp
Many 3D face matching techniques have been developed to perform face recognition. Among them are variants of 3D facial curve matching, techniques that reduce the face data to one or a few 3D curves. The face's central profile, for instance, has proved to work well. However, the selection of the optimal set of 3D curves and the best way to match them have received little attention. We propose a 3D face matching framework that allows profile- and contour-based face matching. Using this framework we evaluate profile and contour types, including those described in the literature, and select subsets of facial curves for effective and efficient face matching. Results on the 3D face retrieval track of SHREC'07 (the 3D SHape Retrieval Contest) show the highest mean average precision achieved so far, using only three facial curves of 45 samples each.
Citations: 43
SHape REtrieval Contest (SHREC) 2008
Pub Date : 2008-06-04 DOI: 10.1109/SMI.2008.4547974
R. Veltkamp, F. T. Haar
The 3D shape retrieval contest SHREC has been organized since 2006. Its general objective is to evaluate the effectiveness of 3D shape retrieval algorithms. 3D media retrieval is overlooked by most commercial search engines, while at the same time it is expected to represent a huge amount of traffic and data stored on the Internet. Recent advances in technology have made available cost-effective scanning devices that could not even be imagined a decade ago. It is now possible to acquire 3D data of a physical object in a few seconds and produce a digital model of its geometry that can easily be shared on the Internet. On the other hand, most PCs connected to the Internet are nowadays equipped with high-performance 3D graphics hardware that supports rendering, interaction, and processing capabilities from home environments to enterprise scenarios.
{"title":"SHape REtrieval Contest (SHREC) 2008","authors":"R. Veltkamp, F. T. Haar","doi":"10.1109/SMI.2008.4547974","DOIUrl":"https://doi.org/10.1109/SMI.2008.4547974","url":null,"abstract":"Since 2006 the 3D shape retrieval contest SHREC has been organized. The general objective is to evaluate the effectiveness of 3D shape retrieval algorithms. 3D media retrieval is overlooked in most commercial search engines, while at the same time it is expected to represent a huge amount of traffic and data stored in the Internet. Recent advances in technology have made available cost-effective scanning devices that could not even be imagined a decade ago. It is now possible to acquire 3D data of a physical object in a few seconds and produce a digital model of its geometry that can be easily shared on the Internet. On the other hand, most PCs connected to the Internet are nowadays equipped with high-performance 3D graphics hardware, that support rendering, interaction and processing capabilities from home environments to enterprise scenarios.","PeriodicalId":118774,"journal":{"name":"2008 IEEE International Conference on Shape Modeling and Applications","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116801095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 9
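Besides MAP, SHREC-style evaluations of retrieval effectiveness commonly report nearest-neighbor and tier-based measures. A minimal sketch of two of them, assuming each model carries a flat class label — the names here are illustrative, not part of any official SHREC code:

```python
# Hypothetical sketch of two common shape-retrieval measures.
# 'ranked_labels' is the list of class labels of the retrieved
# models in rank order (the query itself excluded).

def nearest_neighbor(ranked_labels, query_label):
    """1 if the top-ranked result is from the query's class, else 0."""
    return int(ranked_labels[0] == query_label)

def first_tier(ranked_labels, query_label, class_size):
    """Fraction of the query's class retrieved within the top
    (class_size - 1) results, where class_size counts the query."""
    k = class_size - 1
    hits = sum(1 for label in ranked_labels[:k] if label == query_label)
    return hits / k if k else 0.0
```

For a query of class 'a' with class size 3, the ranking ['a', 'b', 'a', 'c'] gives nearest neighbor 1 and first tier 1/2.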