3D printing is becoming a more common technology with a growing number of applications. Although 3D compression algorithms have been studied in the computer graphics (CG) community for decades, the quality of compressed 3D models has been discussed only in the CG space. In this paper, we discuss the relationship between the PSNR of compressed 3D models and human perception of the printed objects. We conducted a subjective evaluation with 13 participants and found a clear linear relationship between the two. Such a quality perception model is useful for estimating the printing quality of compressed 3D models and for choosing reasonable compression parameters.
T. Yamasaki, Y. Nakano, K. Aizawa. "A prediction model on 3D model compression and its printed quality based on subjective study." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792639.
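As a minimal sketch of the quantities involved: geometry PSNR between an original and a compressed mesh (with identical connectivity), using the bounding-box diagonal as the peak signal, plus a hypothetical linear perception model. The slope and intercept below are placeholders, not the fitted values from the paper.

```python
import numpy as np

def mesh_psnr(original, compressed):
    """Vertex-wise PSNR (dB) between two meshes with identical connectivity.

    The peak signal is the bounding-box diagonal of the original mesh,
    a common choice for geometry PSNR.
    """
    original = np.asarray(original, dtype=float)
    compressed = np.asarray(compressed, dtype=float)
    mse = np.mean(np.sum((original - compressed) ** 2, axis=1))
    peak = np.linalg.norm(original.max(axis=0) - original.min(axis=0))
    return 10.0 * np.log10(peak ** 2 / mse)

def predicted_print_quality(psnr_db, slope=0.08, intercept=0.5):
    """Hypothetical linear perception model: predicted subjective score of
    the printed object as a linear function of geometry PSNR. Coefficients
    are illustrative placeholders only."""
    return slope * psnr_db + intercept
```

A mesh perturbed less from the original yields a higher PSNR and, under the linear model, a higher predicted print quality.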
Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor
This paper presents a novel method for the automatic registration of video streams originating from two depth-sensing cameras. The system consists of a sender and a receiver: the sender obtains streams from two RGBD sensors placed arbitrarily around a room and produces a unified scene as a registered point cloud. A conventional way to support a multi-depth-sensor system is through calibration. However, calibration methods are time consuming and require external markers prior to streaming; if the cameras are moved, calibration has to be repeated. The motivation of this work is to make RGBD sensors easier to use for non-expert users: the cameras need not be calibrated, and if they are moved, the system automatically recovers the alignment of the video streams. DeReEs [Seifi et al. 2014], a new registration algorithm, is used because it is fast and succeeds in registering scenes with small overlapping sections.
Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor. "Continuous and automatic registration of live RGBD video streams with partial overlapping views." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792640.
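DeReEs itself matches visual features between the two views; as an illustration of just the rigid-alignment step at the core of any such registration pipeline (not the authors' algorithm), here is a least-squares solve for the rotation and translation between corresponding points (the Kabsch algorithm):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Kabsch algorithm). Assumes known correspondences; in practice a
    feature matcher, as in DeReEs, would supply them."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given points from one sensor and the same points seen by the other, `R` and `t` bring the first cloud into the second cloud's frame.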
Hiroki Yamamoto, H. Kajita, Hanyuool Kim, Naoya Koizumi, T. Naemura
In design processes and medical visualization, e.g. CT/MRI cross-sectional images, exterior and interior images can help users understand the overall shape of volumetric objects. For this purpose, displays need to provide both vertical and horizontal images at the same time. To display cross-sectional images, an LCD display [Cassinelli et al. 2009] and image projection [Nagakura et al. 2006] have been proposed. Although these displays can show internal images of volumetric objects, seamless crossing of internal and external images cannot be realized, since the images are confined to physical displays.
Hiroki Yamamoto, H. Kajita, Hanyuool Kim, Naoya Koizumi, T. Naemura. "Mid-air plus: a 2.5D cross-sectional mid-air display with transparency control." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792605.
In recent years, there has been rapid development of techniques for superimposing virtual information on real-world scenes and changing the appearance of actual scenes in arbitrary ways. We are particularly interested in means of arbitrarily changing the appearance of real-world scenes without the use of physical interfaces such as glasses or other devices worn by the user. In this paper, we refer to such means as spatial displays. Typical examples of spatial displays include a system that can change the transparency or physical properties of buildings [Rekimoto, 2012] and a system that projects video images [Raskar, 2001]. However, those systems have restrictions such as requiring some kind of physical interface between the user and the scene or not being usable in a well-lit environment. Taking a different approach, we turned our attention to a natural phenomenon referred to as heat haze, in which the appearance of objects is altered by changes in the refractive index of air caused by differences in temperature distribution. We propose the atmoRefractor, a system that can generate and control heat haze on a small scale without an additional physical interface such as lenses. That locally controllable heat haze effect can be used to direct attention by changing the appearance of certain parts of scenes.
Toru Kawanabe, Tomoko Hashida. "atmoRefractor: spatial display by controlling heat haze." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792611.
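The heat-haze effect exploited above comes from the refractive index of air falling as the air heats up and becomes less dense. A rough back-of-envelope sketch of that physics (not the atmoRefractor system itself), using the ideal-gas proportionality (n − 1) ∝ density at constant pressure and n ≈ 1.000293 for visible light at 20 °C:

```python
def air_refractive_index(temp_kelvin, n_ref=1.000293, t_ref=293.15):
    """Approximate refractive index of air at constant pressure.
    Assumes (n - 1) scales with density, i.e. with 1/T for an ideal gas;
    n_ref is the visible-light value at 20 degrees C."""
    return 1.0 + (n_ref - 1.0) * (t_ref / temp_kelvin)

def ray_deflection(temp_cold, temp_hot, width_m, path_m):
    """Order-of-magnitude angular deflection (radians) of a ray travelling
    path_m through air whose temperature changes from temp_cold to
    temp_hot across a transverse distance width_m (small-angle mirage
    approximation: deflection ~ path * dn/dx)."""
    dn = air_refractive_index(temp_cold) - air_refractive_index(temp_hot)
    return path_m * dn / width_m
```

For a ~60 °C temperature difference over a few centimetres, this yields deflections on the order of 10⁻⁴ rad, enough to visibly shimmer fine detail behind the heated air.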
We present a novel approach to image-based relighting using the lighting controls available in a regular room. We employ the individual light sources available in the room, such as windows and house lights, as basis lighting conditions. We then optimize the projection of a desired lighting environment into the sparse room-lighting basis in order to closely approximate the target lighting environment with the given basis. We achieve plausible relit results that compare favourably with ground-truth relighting from dense sampling of the reflectance field.
Antoine Toisoul, A. Ghosh. "Image based relighting using room lighting basis." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792618.
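The core linear-combination step can be sketched as follows: solve for per-light weights that best reproduce the target environment, then blend the basis images with those weights. This is a minimal least-squares sketch, not the authors' optimization; physical lights cannot subtract light, so the weights are clipped non-negative (a properly constrained solver would be the more principled choice).

```python
import numpy as np

def relight(basis_images, basis_lighting, target_lighting):
    """Relight a scene as a weighted sum of basis images.

    basis_images   : (k, h, w) images, one per room light source
    basis_lighting : (k, m) rows give each light's contribution to m
                     environment samples
    target_lighting: (m,) desired lighting environment, same sampling

    Solves basis_lighting^T w ~ target_lighting in the least-squares
    sense, clips w to be non-negative, and blends the basis images.
    """
    w, *_ = np.linalg.lstsq(np.asarray(basis_lighting, float).T,
                            np.asarray(target_lighting, float), rcond=None)
    w = np.clip(w, 0.0, None)
    return np.tensordot(w, np.asarray(basis_images, float), axes=1)
```

With an orthogonal basis (each light hitting disjoint environment samples), the recovered weights are exactly the target coefficients.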
Constructing a free-form 3D surface model is still difficult. In our view, however, constructing a simple voxel model is relatively easy because it can be built with blocks; even small children can build a voxel model. We present a method to convert a voxel model into a free-form surface model in order to facilitate the construction of surface models.
Kazutaka Nakashima, T. Igarashi. "Extraction of a smooth surface from voxels preserving sharp creases." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792635.
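As a toy 2D analogue of crease-preserving smoothing (purely illustrative, not the poster's algorithm): Laplacian-smooth a closed polyline extracted from blocky geometry, but freeze vertices whose turning angle exceeds a crease threshold so sharp corners survive.

```python
import math

def smooth_preserving_creases(points, iterations=10, crease_deg=60.0):
    """Laplacian-smooth a closed 2D polyline, keeping vertices whose
    turning angle exceeds crease_deg fixed. A hypothetical 2D stand-in
    for crease-preserving surface extraction from voxels."""
    pts = [list(p) for p in points]
    n = len(pts)

    def turning_angle(i):
        ax, ay = pts[i - 1]; bx, by = pts[i]; cx, cy = pts[(i + 1) % n]
        ux, uy = bx - ax, by - ay
        vx, vy = cx - bx, cy - by
        cosang = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
        return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

    creases = {i for i in range(n) if turning_angle(i) > crease_deg}
    for _ in range(iterations):
        new = [p[:] for p in pts]
        for i in range(n):
            if i in creases:
                continue  # crease vertices stay put
            px, py = pts[i - 1]; qx, qy = pts[(i + 1) % n]
            new[i] = [(px + qx) / 2.0, (py + qy) / 2.0]
        pts = new
    return pts, creases
```

On a unit square with a noisy edge midpoint, the four corners are detected as creases and stay fixed, while the noisy vertex relaxes back onto the edge.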
Xueming Yu, Shanhe Wang, Jay Busch, Thai-Binh Phan, Tracy McSheery, M. Bolas, P. Debevec
High-end facial performance capture solutions typically use head-mounted camera systems, which provide one or more close-up video streams of each actor's performance. These provide clear views of each actor's performance, but can be bulky and uncomfortable, get in the way of sight lines, and prevent actors from getting close to each other. To address this, we propose a virtual head-mounted camera system: an array of cameras placed around the performance capture volume that automatically track zoomed-in, sharply focused, high-resolution views of each actor's face from a multitude of directions. The resulting imagery can be used in conjunction with body motion capture data to derive nuanced facial performances without head-mounted cameras.
Xueming Yu, Shanhe Wang, Jay Busch, Thai-Binh Phan, Tracy McSheery, M. Bolas, P. Debevec. "Virtual headcam: pan/tilt mirror-based facial performance tracking." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792625.
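The tracking step reduces to aiming each camera at a 3D head position reported by the motion capture system. A minimal geometric sketch (hypothetical coordinate conventions; a real pan/tilt-mirror rig folds the optical path, so mirror angles would be roughly half these values, and axis offsets would need calibrating):

```python
import math

def pan_tilt_to_target(camera_pos, target_pos):
    """Pan (azimuth) and tilt (elevation) angles, in degrees, that aim a
    camera at camera_pos toward target_pos. Assumes x/y is the ground
    plane and z is up; ignores mirror folding and axis offsets."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dz = target_pos[2] - camera_pos[2]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt
```

Feeding each frame's head position through this mapping yields the per-frame pan/tilt commands that keep the face centered.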
With the variety of features now available in vehicles, such as multimedia, the dashboard has become rather complicated. This has increased the need for HMI (Human Machine Interface) research in the design creation process. However, design changes can occur even after a design is selected, because the initial evaluation is too simple to cover all of the requirements: designers do not carefully consider the HMI during the sketching phase, and design issues are discovered too far along in the process. This study proposes a projection-based HMI simulation tool to pre-evaluate an HMI, through virtual function implementation, prior to selecting specifications. The system evaluates each function of the center fascia using quantitative criteria such as performance time and distraction time. The objective of the system is to quickly analyze and validate designs through virtual means and to find interface issues with a quantitative method.
Seunghyun Woo, Daeyun An, Jong-Gab Oh, G. Hong. "WAOH: virtual automotive HMI evaluation tool." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792602.
When we capture a player snapping strings with a CMOS camera, the strings seem to vibrate in a wobbly, slow-motion pattern. Because a CMOS sensor scans each line of video in sequence, fast-moving objects are distorted during the scanning sequence. This morphing and distortion is called the rolling shutter effect, which is considered an artistic photographic technique akin to strip photography and slit-scan photography. However, the effect can only be seen on a camera finder or a PC screen; the guitar player and audience are quite unlikely to notice it with the naked eye.
S. Fukushima, T. Naemura. "Wobble strings: spatially divided stroboscopic effect for augmenting wobbly motion of stringed instruments." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792603.
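The wobble pattern itself is easy to model: each sensor row is sampled at a slightly later time, so a string's purely temporal oscillation becomes a spatial wave down the frame. A minimal simulation (idealized sinusoidal string, uniform row readout; parameter values are illustrative):

```python
import math

def rolling_shutter_string(rows, row_time, freq, amplitude=1.0, phase=0.0):
    """Horizontal displacement of a vibrating string as seen by a rolling
    shutter. Row r is read at time r * row_time, so the temporal motion
    x(t) = A * sin(2*pi*f*t + phase) is frozen into a spatial wave."""
    return [amplitude * math.sin(2 * math.pi * freq * (r * row_time) + phase)
            for r in range(rows)]
```

With a 480-row frame read out in 1/60 s, a 60 Hz string traces exactly one wobble period down the frame; the number of visible periods is the string frequency times the frame readout time.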
During a snowfall, snow crystals accumulate on the ground and gradually form a complex porous medium made of air, water vapour, ice, and sometimes liquid water. This ground-lying snow transforms over time, depending on the physical parameters of the environment. The main purpose of the digitalSnow project is to provide efficient computational tools to study the metamorphism of real snow microstructures from 3D images acquired with X-ray tomography techniques. We design 3D image-based numerical models that can simulate the shape evolution of the snow microstructure during its metamorphism. As a key measurement, the (mean) curvature of the snow microstructure boundary plays a crucial role in metamorphism equations (mostly driven by mean curvature flow). In previous work, we proposed robust estimators of 2D curvature and of 3D mean and principal curvatures using integral invariants. In short, curvature quantities are estimated by applying a spherical convolution kernel of given radius R to points on the surface [Coeurjolly et al. 2014]. The specific aspect of these estimators is that they are defined on (isothetic) digital surfaces (boundaries of shapes in Z3). Tailored to this digital model, these estimators allow us to mathematically prove their multigrid convergence: for a class of mathematical shapes (e.g. with C3 boundary and bounded positive curvature), the estimated quantity converges to the underlying Euclidean one when the shapes are digitized on grids with gridstep tending to zero. In this work, we propose to use the radius R of our curvature estimators as a scale-space parameter to extract features on digital shapes. Many feature estimators exist in the literature, on either point clouds or meshes ("ridge-valley" lines, thresholds on principal curvatures, spectral analysis of Laplacian matrix eigenvalues, ...). In the context of objects in Z3, and using our robust curvature estimator, we define a new feature extraction approach for which theoretical results can be proven in the multigrid framework.
Jérémy Levallois, D. Coeurjolly, J. Lachaud. "Feature extraction on digital snow microstructures." ACM SIGGRAPH 2015 Posters, July 31, 2015. DOI: 10.1145/2787626.2792637.
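The integral-invariant idea can be sketched in 2D: measure the area A(R) of the intersection between the shape and a ball of radius R centered on a boundary point, then apply the estimator kappa(R) = 3*pi/(2R) - 3*A(R)/R^3. The brute-force cell-counting below is a didactic sketch on a regular grid, not the optimized digital-surface implementation of [Coeurjolly et al. 2014].

```python
import math

def curvature_integral_invariant(inside, x, y, R, h):
    """2D integral-invariant curvature estimate at boundary point (x, y).

    inside : predicate for membership of a real-valued point in the shape
    R      : kernel ball radius (the scale-space parameter)
    h      : grid step; A(R) is approximated by counting grid cells of
             area h*h whose centers lie in both the ball and the shape

    Applies kappa(R) = 3*pi/(2*R) - 3*A(R)/R**3.
    """
    steps = int(R / h) + 1
    area = 0.0
    for i in range(-steps, steps + 1):
        for j in range(-steps, steps + 1):
            px, py = x + i * h, y + j * h
            if (px - x) ** 2 + (py - y) ** 2 <= R * R and inside(px, py):
                area += h * h
    return 3.0 * math.pi / (2.0 * R) - 3.0 * area / R ** 3
```

On a digitized disk of radius 10, the estimate at a boundary point approaches the true curvature 1/10, with the error shrinking as R and h decrease, consistent with the multigrid convergence discussed above.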