
Latest publications from ACM SIGGRAPH 2015 Posters

A prediction model on 3D model compression and its printed quality based on subjective study
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792639
T. Yamasaki, Y. Nakano, K. Aizawa
3D printing is becoming a more common technology and has a growing number of applications. Although 3D compression algorithms have been studied in the computer graphics (CG) community for decades, the quality of compressed 3D models has been discussed only in the CG space. In this paper, we discuss the relationship between the PSNR of compressed 3D models and human perception of the printed objects. We conducted a subjective evaluation with 13 participants and found a clear linear relationship between the two. Such a quality perception model is useful for estimating the printing quality of compressed 3D models and for choosing reasonable compression parameters.
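A linear perception model of this kind can be sketched as a simple least-squares fit. The PSNR values and mean opinion scores below are illustrative placeholders, not the study's data:

```python
import numpy as np

# Hypothetical data: PSNR (dB) of compressed 3D models and mean opinion
# scores (1-5) from a subjective study; the values are illustrative only.
psnr = np.array([30.0, 35.0, 40.0, 45.0, 50.0])
mos = np.array([1.8, 2.7, 3.4, 4.1, 4.8])

# Least-squares fit of the linear perception model: mos ~ a * psnr + b.
a, b = np.polyfit(psnr, mos, deg=1)

def predict_mos(psnr_db):
    """Predict perceived print quality from the PSNR of a compressed model."""
    return a * psnr_db + b
```

Given such a fit, one can invert it to find the smallest PSNR (and hence the strongest compression) that still meets a target quality score.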
Citations: 0
Continuous and automatic registration of live RGBD video streams with partial overlapping views
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792640
Afsaneh Rafighi, S. Seifi, Oscar E. Meruvia Pastor
This paper presents a novel method for automatic registration of video streams originating from two depth-sensing cameras. The system consists of a sender and a receiver: the sender obtains streams from two RGBD sensors placed arbitrarily around a room and produces a unified scene as a registered point cloud. The conventional way to support a multi-depth-sensor system is calibration. However, calibration methods are time-consuming and require external markers before streaming, and if the cameras are moved, calibration has to be repeated. The motivation of this work is to make RGBD sensors easier to use for non-expert users: the cameras need not be calibrated, and if they are moved, the system automatically recovers the alignment of the video streams. DeReEs [Seifi et al. 2014], a new registration algorithm, is used because it is fast and succeeds in registering scenes with small overlapping sections.
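The rigid-alignment step at the core of any such registration pipeline can be sketched with the Kabsch/Procrustes solve on corresponding points. This is a generic sketch of that step, not the DeReEs algorithm itself:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate the rotation R and translation t mapping src points onto
    their correspondences dst (Kabsch / orthogonal Procrustes), the core
    solve in rigid point-cloud registration. src, dst: (n, 3) arrays."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

A full pipeline would first find the correspondences (e.g. from visual features in the overlapping region) and iterate; this solve is what turns them into a registered point cloud.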
Citations: 0
Mid-air plus: a 2.5D cross-sectional mid-air display with transparency control
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792605
Hiroki Yamamoto, H. Kajita, Hanyuool Kim, Naoya Koizumi, T. Naemura
In design processes and medical visualization, e.g. CT/MRI cross-sectional images, exterior and interior images together help users understand the overall shape of volumetric objects. For this purpose, displays need to provide both vertical and horizontal images at the same time. To display cross-sectional images, an LCD display [Cassinelli et al. 2009] and image projection [Nagakura et al. 2006] have been proposed. Although these displays can show internal images of volumetric objects, seamless crossing of internal and external images cannot be realized, since the images are confined to physical displays.
Citations: 1
atmoRefractor: spatial display by controlling heat haze
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792611
Toru Kawanabe, Tomoko Hashida
In recent years, there has been rapid development of techniques for superimposing virtual information on real-world scenes and for changing the appearance of actual scenes in arbitrary ways. We are particularly interested in means of arbitrarily changing the appearance of real-world scenes without physical interfaces such as glasses or other devices worn by the user. In this paper, we refer to such means as spatial displays. Typical examples of spatial displays include a system that can change the transparency or physical properties of buildings [Rekimoto, 2012] and a system that projects video images [Raskar, 2001]. However, those systems have restrictions, such as requiring some kind of physical interface between the user and the scene or not being usable in a well-lit environment. Taking a different approach, we turned our attention to a natural phenomenon known as heat haze, in which the appearance of objects is altered by changes in the refractive index of air caused by differences in temperature distribution. We propose atmoRefractor, a system that can generate and control heat haze on a small scale without an additional physical interface such as lenses. This locally controllable heat-haze effect can be used to direct attention by changing the appearance of selected parts of a scene.
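The physical mechanism the system exploits (air's refractive index falling as temperature rises) can be sketched from the Gladstone-Dale relation together with the ideal gas law. The constants below are standard textbook approximations, not values from the paper:

```python
def air_refractive_index(temp_k, n0=1.000293, t0=288.15):
    """Approximate refractive index of air at temperature temp_k (kelvin),
    assuming constant pressure: the Gladstone-Dale relation makes n - 1
    proportional to density, which for an ideal gas scales as t0 / temp_k.
    n0 is the index at the reference temperature t0 (15 degrees C)."""
    return 1.0 + (n0 - 1.0) * (t0 / temp_k)
```

Heated air pockets thus have a slightly lower index than the surrounding air, and light crossing the resulting index gradient is deflected, which is the visible shimmering of heat haze.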
Citations: 1
Image based relighting using room lighting basis
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792618
Antoine Toisoul, A. Ghosh
We present a novel approach to image based relighting using the lighting controls available in a regular room. We employ the individual light sources available in the room, such as windows and house lights, as basis lighting conditions. We further optimize the projection of a desired lighting environment into the sparse room lighting basis, so as to closely approximate the target lighting environment with the given basis. We achieve plausible relit results that compare favourably with ground-truth relighting obtained with dense sampling of the reflectance field.
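The projection into the lighting basis can be sketched as a least-squares solve; the matrix shapes are assumptions. This is a plain unconstrained sketch, while a physical setup would further constrain the weights to be non-negative (e.g. with NNLS), since real lights cannot subtract illumination:

```python
import numpy as np

def project_lighting(basis, target):
    """Project a target lighting environment onto a sparse basis of room
    lights. basis: (n_pixels, n_lights), one column per light source's
    recorded contribution; target: (n_pixels,) desired environment.
    Returns the per-light weights minimizing the L2 approximation error."""
    weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return weights
```

With the weights in hand, the relit image is just the weighted sum of the basis images, which is the usual image based relighting formulation.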
Citations: 0
Extraction of a smooth surface from voxels preserving sharp creases
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792635
Kazutaka Nakashima, T. Igarashi
Construction of a free-form 3D surface model is still difficult. From our point of view, however, construction of a simple voxel model is relatively easy because it can be built with blocks; even small children can build one. We present a method to convert a voxel model into a free-form surface model in order to facilitate the construction of surface models.
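As a minimal illustration of the voxel side of such a pipeline, the exposed faces of a binary voxel grid can be enumerated as follows; the paper's crease-preserving smoothing step is not reproduced here:

```python
import numpy as np

def boundary_faces(vox):
    """List the exposed faces of a binary voxel grid vox[x, y, z]
    (1 = filled). Each face is (cell_index, axis, side); a surfacing
    method would then turn these into a mesh and smooth it while
    preserving sharp creases. (Simplified sketch only.)"""
    faces = []
    padded = np.pad(vox, 1)  # empty border so every face gets checked
    for axis in range(3):
        for side in (-1, 1):
            # neighbor[i] is the cell one step in the `side` direction
            neighbor = np.roll(padded, -side, axis=axis)
            exposed = (padded == 1) & (neighbor == 0)
            for idx in np.argwhere(exposed):
                faces.append((tuple(idx - 1), axis, side))
    return faces
```

The face list is exactly the blocky surface a child's block model exposes; converting it to a smooth free-form surface is the harder step the paper addresses.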
Citations: 1
Virtual headcam: pan/tilt mirror-based facial performance tracking
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792625
Xueming Yu, Shanhe Wang, Jay Busch, Thai-Binh Phan, Tracy McSheery, M. Bolas, P. Debevec
High-end facial performance capture solutions typically use head-mounted camera systems which provide one or more close-up video streams of each actor's performance. These provide clear views of each actor's performance, but can be bulky and uncomfortable, get in the way of sight lines, and prevent actors from getting close to each other. To address this, we propose a virtual head-mounted camera system: an array of cameras placed around the performance-capture volume which automatically track zoomed-in, sharply focused, high-resolution views of each actor's face from a multitude of directions. The resulting imagery can be used in conjunction with body motion capture data to derive nuanced facial performances without head-mounted cameras.
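Steering a pan/tilt unit at a tracked head position reduces to two angles. The axis conventions below (y up, z forward) are assumptions for illustration, not the system's actual calibration:

```python
import math

def pan_tilt_to(target, camera_pos):
    """Pan/tilt angles (radians) that aim a camera or steering mirror at a
    tracked 3D head position. Pan rotates about the vertical (y) axis,
    tilt about the horizontal; points are (x, y, z) with z forward."""
    dx = target[0] - camera_pos[0]
    dy = target[1] - camera_pos[1]
    dz = target[2] - camera_pos[2]
    pan = math.atan2(dx, dz)
    tilt = math.atan2(dy, math.hypot(dx, dz))
    return pan, tilt
```

Fed with per-frame head positions from the motion capture system, such angles let each fixed camera keep a tight crop on an actor's face.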
Citations: 0
WAOH: virtual automotive HMI evaluation tool
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792602
Seunghyun Woo, Daeyun An, Jong-Gab Oh, G. Hong
With various features such as multimedia now available in vehicles, the dashboard has become rather complicated. Therefore, an increased need for HMI (Human Machine Interface) research has arisen in the design-creation process. However, design changes occur even after a design is selected, because the initial evaluation is too simple to cover all of the requirements: designers do not carefully consider the HMI during the sketching phase, and design issues are discovered too late in the process. This study proposes a projection-based HMI simulation tool that pre-evaluates an HMI, before specifications are selected, through virtual function implementation. The system evaluates each function of the center fascia using quantitative criteria such as performance time and distraction time. The objective of the system is to quickly analyze and validate designs through virtual means and to find interface issues with a quantitative method.
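A toy composite score over the two named criteria might look like the following. The budgets and the equal weighting are illustrative assumptions, not the paper's actual scoring:

```python
def hmi_score(perf_time_s, distraction_time_s,
              perf_budget_s=5.0, distraction_budget_s=2.0):
    """Toy composite score (1 = best, 0 = worst) for one center-fascia
    function, combining the two quantitative criteria named in the paper:
    performance time and distraction time. Budgets and equal weighting
    are hypothetical."""
    perf_penalty = perf_time_s / perf_budget_s
    distraction_penalty = distraction_time_s / distraction_budget_s
    return max(0.0, 1.0 - 0.5 * (perf_penalty + distraction_penalty))
```

Scoring every function this way makes alternative dashboard layouts directly comparable before any hardware is built.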
Citations: 0
Wobble strings: spatially divided stroboscopic effect for augmenting wobbly motion of stringed instruments
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792603
S. Fukushima, T. Naemura
When we film strings being played with a CMOS camera, the strings seem to vibrate in a wobbly slow-motion pattern. Because a CMOS sensor scans one line of video at a time, fast-moving objects are distorted during the scanning sequence. The resulting morphing and distortion is called the rolling-shutter effect, which is also used as an artistic photographic technique, as in strip photography and slit-scan photography. However, the effect can only be seen on a camera finder or a PC screen; the guitar player and audience are quite unlikely to notice it with the naked eye.
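The row-by-row sampling that produces the wobble can be simulated directly: each scanline is read at its own time, so a fast oscillation becomes a spatial ripple down the frame. The frequency and line time below are illustrative assumptions:

```python
import numpy as np

def rolling_shutter_string(n_rows=480, freq_hz=110.0, line_time_s=50e-6,
                           amplitude=1.0):
    """Simulate how a CMOS rolling shutter images a vibrating string:
    scanline y is read at time y * line_time_s, so a string oscillating
    at freq_hz appears displaced by a different amount on every row.
    Returns the horizontal displacement of the string per row."""
    t = np.arange(n_rows) * line_time_s   # per-row readout time
    return amplitude * np.sin(2 * np.pi * freq_hz * t)
```

With these defaults the frame spans 24 ms, so a 110 Hz string (roughly an open A) traces about two and a half wobble periods from the top row to the bottom.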
Citations: 5
Feature extraction on digital snow microstructures
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792637
Jérémy Levallois, D. Coeurjolly, J. Lachaud
During a snowfall, snow crystals accumulate on the ground and gradually form a complex porous medium constituted of air, water vapour, ice and sometimes liquid water. This ground-lying snow transforms with time, depending on the physical parameters of the environment. The main purpose of the digitalSnow project is to provide efficient computational tools to study the metamorphism of real snow microstructures from 3D images acquired using X-ray tomography techniques. We design 3D image-based numerical models that can simulate the shape evolution of the snow microstructure during its metamorphism. As a key measurement, the (mean) curvature of the snow microstructure boundary plays a crucial role in metamorphosis equations (mostly driven by mean curvature flow). In our previous work, we proposed robust estimators of 2D curvature and of 3D mean and principal curvatures using integral invariants. In short, curvature quantities are estimated by applying a spherical convolution kernel of given radius R to point surfaces [Coeurjolly et al. 2014]. The specific aspect of these estimators is that they are defined on (isothetic) digital surfaces (boundaries of shapes in Z3). Tailored to this digital model, these estimators allow us to mathematically prove their multigrid convergence: for a class of mathematical shapes (e.g. C3-boundary and bounded positive curvature), the estimated quantity converges to the underlying Euclidean one when shapes are digitized on grids with gridstep tending to zero. In this work, we propose to use the radius R of our curvature estimators as a scale-space parameter to extract features on digital shapes. Many feature estimators exist in the literature, on point clouds or meshes ("ridge-valley", thresholds on principal curvatures, spectral analysis of Laplacian matrix eigenvalues, . . . ). In the context of objects in Z3, and using our robust curvature estimator, we define a new feature-extraction approach for which theoretical results can be proven in the multigrid framework.
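The 2D integral-invariant estimator described above can be sketched on a binary image: count the shape pixels inside the ball of radius R around a boundary point (the digital version of the intersection area A_R), then invert the Taylor expansion A_R ~ pi R^2 / 2 - kappa R^3 / 3. This is a minimal 2D sketch, not the project's 3D implementation:

```python
import numpy as np

def integral_invariant_curvature(shape, p, R):
    """2D integral-invariant curvature estimate at boundary point p of a
    binary image `shape` (True = inside). Counts shape pixels inside the
    digital ball of radius R around p and solves
    A_R = pi R^2 / 2 - kappa R^3 / 3 for kappa."""
    ys, xs = np.indices(shape.shape)
    ball = (ys - p[0]) ** 2 + (xs - p[1]) ** 2 <= R ** 2
    area = np.count_nonzero(shape & ball)
    return 3.0 * (np.pi * R * R / 2.0 - area) / R ** 3
```

On a digital disk the estimate approaches 1/radius as the resolution grows, which is the multigrid convergence the abstract refers to; the radius R acts as the scale parameter, smoothing out features smaller than R.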
Citations: 1