
SIGGRAPH Asia 2019 Posters: Latest Publications

Pop-up digital tabletop: seamless integration of 2D and 3D visualizations in a tabletop environment
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364571
Daisuke Inagaki, Yucheng Qiu, Raku Egawa, Takashi Ijiri
We propose a pop-up digital tabletop system that seamlessly integrates two-dimensional (2D) and three-dimensional (3D) representations of contents in a digital tabletop environment. By combining a digital tabletop display of 2D contents with a light-field display, we can visualize a part of the 2D contents in 3D. Users of our system can overview the contents in their 2D representation and then observe details of the contents in the 3D visualization. The feasibility of our system is demonstrated with two applications, one for browsing cityscapes and the other for viewing insect specimens.
Citations: 0
AUDIOZOOM: Location Based Sound Delivery system
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364596
Chinmay Rajguru, Daniel Blaszczak, A. Pouryazdan, T. J. Graham, G. Memoli
Citations: 4
Midair Haptic Representation for Internal Structure in Volumetric Data Visualization
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364584
T. Takashina, Mitsuru Ito, Yuji Kokumai
In this paper, we propose a method to perceive the internal structure of volumetric data using midair haptics. In this method, we render haptic stimuli using a Gaussian mixture model that approximates the internal structure of the volumetric data. The user's hand is tracked by a sensor and represented in a virtual space, so users can touch the volumetric data with a virtual hand. The focal points of the ultrasound phased arrays that present the sense of touch are determined from the position of the user's hand and the contact point of the virtual hand on the volumetric data. These haptic cues allow the user to directly perceive the sensation of touching the inside of the volumetric data. Our proposal is a solution to the occlusion problem in volumetric data visualization.
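The density-to-stimulus mapping described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the mixture components, the gain, and the clamping are assumed, and a real system would first fit the mixture to the volume (e.g. by EM) and then drive the phased-array hardware.

```python
import math

def gaussian3(p, mu, sigma):
    """Isotropic 3D Gaussian density at point p."""
    d2 = sum((a - b) ** 2 for a, b in zip(p, mu))
    norm = (2.0 * math.pi * sigma ** 2) ** 1.5
    return math.exp(-d2 / (2.0 * sigma ** 2)) / norm

def mixture_density(p, components):
    """components: list of (weight, mean, sigma) tuples."""
    return sum(w * gaussian3(p, mu, s) for w, mu, s in components)

def focal_amplitude(contact_point, components, gain=1.0, max_amp=1.0):
    """Map the local mixture density at the virtual hand's contact
    point to a clamped ultrasound focal-point amplitude."""
    return min(max_amp, gain * mixture_density(contact_point, components))

# Two assumed components standing in for a fitted internal structure.
components = [(0.7, (0.0, 0.0, 0.0), 0.2), (0.3, (0.5, 0.0, 0.0), 0.1)]
amp_inside = focal_amplitude((0.0, 0.0, 0.0), components, gain=0.05)
amp_outside = focal_amplitude((2.0, 2.0, 2.0), components, gain=0.05)
```

Touching near a dense region yields a stronger stimulus than touching empty space, which is the cue that lets the user feel the internal structure.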
Citations: 2
A Wavelet Energy Decomposition Signature for Robust Non-Rigid Shape Matching
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364556
Yiqun Wang, Jianwei Guo, Dongming Yan, Xiaopeng Zhang
We present a novel local shape descriptor, named the wavelet energy decomposition signature (WEDS), for robustly matching non-rigid 3D shapes with different resolutions. The local shape descriptors are generated by decomposing the Dirichlet energy on the input triangular mesh. Our approach can either be applied directly or used as the input to other learning-based approaches. Experimental results show that the proposed WEDS achieves promising results on shape matching tasks, even for shapes with incompatible structures.
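As a minimal sketch of the underlying quantity (not the WEDS descriptor itself), the Dirichlet energy of a scalar function on a triangle mesh can be computed with the standard cotangent weights; the paper decomposes this energy to build its local descriptor.

```python
import math

def cot(a, b):
    """Cotangent of the angle between 3D vectors a and b."""
    dot = sum(x * y for x, y in zip(a, b))
    cx = (a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0])
    return dot / math.sqrt(sum(c * c for c in cx))

def dirichlet_energy(verts, faces, f):
    """E(f) = 1/2 * sum over edges of w_ij (f_i - f_j)^2, where
    w_ij = (cot(alpha) + cot(beta)) / 2 with the angles opposite edge ij.
    Accumulated per face corner, giving the 0.25 factor below."""
    energy = 0.0
    for (i, j, k) in faces:
        for (a, b, c) in ((i, j, k), (j, k, i), (k, i, j)):
            # cotangent at corner c, opposite edge (a, b)
            u = tuple(x - y for x, y in zip(verts[a], verts[c]))
            v = tuple(x - y for x, y in zip(verts[b], verts[c]))
            energy += 0.25 * cot(u, v) * (f[a] - f[b]) ** 2
    return energy

# Linear function f(x, y) = x on a single unit right triangle.
tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
E = dirichlet_energy(tri, [(0, 1, 2)], [0.0, 1.0, 0.0])
```

For this linear function the discrete energy is 0.25, matching the continuous value (1/2) * |grad f|^2 * area = (1/2) * 1 * (1/2).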
Citations: 1
User-friendly Interior Design Recommendation
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364562
Akari Nishikawa, K. Ono, M. Miki
We propose a novel search engine that recommends a combination of furniture preferred by a user based on image features. In recent years, research on furniture search engines has attracted attention with the development of deep learning techniques. However, existing search engines mainly focus on techniques for retrieving similar furniture items, and few studies have considered interior combinations. Even techniques that consider combinations do not take the preferences of each user into account: they make recommendations based on the text data attached to an image and do not incorporate a judgment mechanism based on differences in individual preference, such as the shape and color of furniture. Thus, in this study, we propose a method that recommends items matching a selected item for each individual based on individual preference, by analyzing images selected by the user and automatically creating a rule for a combination of furniture based on the proposed features.
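The matching step might look like the following toy sketch, which ranks catalog items by cosine similarity of image feature vectors. The feature vectors, catalog names, and similarity measure here are illustrative assumptions; the paper uses learned image features plus per-user combination rules.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def recommend(selected, catalog, k=1):
    """Return names of the k catalog items most similar to the selection."""
    ranked = sorted(catalog.items(),
                    key=lambda kv: cosine(selected, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical 3D feature vectors for three furniture items.
catalog = {
    "oak_table": (0.9, 0.1, 0.3),
    "steel_lamp": (0.1, 0.9, 0.2),
    "oak_chair": (0.8, 0.2, 0.35),
}
picks = recommend((0.85, 0.15, 0.3), catalog, k=2)
```

A selection resembling the oak items surfaces both of them ahead of the lamp, which is the behavior a preference-aware combination rule would then refine.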
Citations: 1
Fundus imaging using DCRA toward large eyebox
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364579
Yuichi Atarashi, Kazuki Otao, Takahito Aoto, Yoichi Ochiai
We propose a novel fundus imaging system using a dihedral corner reflector array (DCRA), an optical component that works as a lens but has neither a focal length nor an optical axis. A DCRA transfers a light source to its plane-symmetric point. Conventionally, this feature has been exploited in many display applications in the field of computer graphics, such as virtual retinal displays and three-dimensional displays. As a sensing application, in contrast, we use a DCRA to place a virtual camera in/on an eyeball to capture the fundus. The proposed system has three features: (1) it is robust to eye movement, (2) it is wavelength-independent, and (3) it uses a simple optical system. In our experiments, the proposed system achieves a large eyebox of 8 mm. The proposed system could be applied to household preventive medicine in daily life.
Citations: 0
Eye-Tracking Based Adaptive Parallel Coordinates
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364563
Mohammad Chegini, K. Andrews, T. Schreck, A. Sourin
Parallel coordinates is a well-known technique for visual analysis of high-dimensional data. Although it is effective for interactive discovery of patterns in subsets of dimensions and data records, it also has scalability issues for large datasets. In particular, the amount of visual information potentially being shown in a parallel coordinates plot grows combinatorially with the number of dimensions. Choosing the right ordering of axes is crucial, and poor design can lead to visual noise and a cluttered plot. In this case, the user may overlook a significant pattern, or leave some dimensions unexplored. In this work, we demonstrate how eye-tracking can help an analyst efficiently and effectively reorder the axes in a parallel coordinates plot. Implicit input from an inexpensive eye-tracker assists the system in finding unexplored dimensions. Using this information, the system guides the user either visually or automatically to find further appropriate orderings of the axes.
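One plausible heuristic in this spirit, purely illustrative and not the authors' algorithm: count gaze fixations per axis and interleave the least-viewed axes with the most-viewed ones, so unexplored dimensions surface next to familiar ones.

```python
def reorder_axes(axes, fixations):
    """axes: list of axis names; fixations: dict mapping axis -> gaze count.
    Returns a new ordering alternating well-viewed and unviewed axes."""
    by_attention = sorted(axes, key=lambda a: fixations.get(a, 0), reverse=True)
    half = len(axes) // 2
    hot, cold = by_attention[:half], by_attention[half:]
    order = []
    for h, c in zip(hot, cold):     # alternate a viewed axis with an unviewed one
        order.extend((h, c))
    order.extend(cold[len(hot):])   # cold is never shorter than hot
    return order

order = reorder_axes(["a", "b", "c", "d", "e"],
                     {"a": 10, "b": 8, "c": 0, "d": 5, "e": 1})
```

The never-fixated axis "c" ends up adjacent to viewed ones instead of being buried at one end of the plot.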
Citations: 2
Computational Spectral-Depth Imaging with a Compact System
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364570
Mingde Yao, Zhiwei Xiong, Lizhi Wang, Dong Liu, Xuejin Chen
In this paper, a compact imaging system is developed to enable simultaneous acquisition of the spectral and depth information in real-time with high resolution. We achieve this goal using only two commercial cameras and relying on an efficient computational reconstruction algorithm with deep learning. For the first time, this work allows 5D information (3D space + 1D spectrum + 1D time) of the target scene to be captured with a miniaturized apparatus and without active illumination.
Citations: 2
Color-Based Edge Detection on Mesh Surface
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364580
Yi-Jheng Huang
Edge detection is one of the fundamental techniques in image processing and can be applied in many places. We propose an algorithm for detecting edges based on the color of a mesh surface. To the best of our knowledge, we are the first to detect edges on a mesh surface based on its color. The basic idea of our method is to compute the color gradient magnitudes of a mesh surface. To do that, the mesh is split along the intersections of surfaces into segments. Then, the segments are voxelized and assigned a representative color by averaging the colors at the boundaries between voxels and mesh faces. Artificial neighbors are created for completeness, and 3D Canny edge detection is applied to the resulting 3D representation. Lastly, additional edges are added at the intersections of two surfaces. Figure 1 shows the results of our method.
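The color-gradient step can be illustrated on a voxelized color field. The following is a toy sketch, assuming RGB tuples on a small dense grid and simple forward differences; the paper's pipeline additionally segments the mesh, creates artificial neighbors, and runs a full 3D Canny pass.

```python
import math

def color_gradient_magnitude(grid):
    """grid[z][y][x] -> (r, g, b). Returns a same-shape grid of per-voxel
    color gradient magnitudes using forward differences along x, y, z."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])

    def sqdiff(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                g2 = 0.0
                if x + 1 < nx: g2 += sqdiff(grid[z][y][x + 1], grid[z][y][x])
                if y + 1 < ny: g2 += sqdiff(grid[z][y + 1][x], grid[z][y][x])
                if z + 1 < nz: g2 += sqdiff(grid[z + 1][y][x], grid[z][y][x])
                out[z][y][x] = math.sqrt(g2)
    return out

# 1x1x2 grid with a single color edge between a red and a blue voxel.
red, blue = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)
mags = color_gradient_magnitude([[[red, blue]]])
```

Voxels at a color boundary get a large magnitude, which is what a subsequent Canny-style thresholding pass would keep as an edge.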
Citations: 2
A Method to Create Fluttering Hair Animations That Can Reproduce Animator’s Techniques
Pub Date: 2019-11-17 DOI: 10.1145/3355056.3364582
Naoaki Kataoka, Tomokazu Ishikawa, I. Matsuda
We propose a method, based on an animator’s technique, to create animations of objects fluttering in the wind, such as hair and flags. As a preliminary study, we analyzed how fluttering objects are expressed in hand-drawn animations and confirmed that there is a traditional technique commonly used by professional animators. In the case of hair, for example, the tip of the hair is often moved in the shape of a figure eight, and the remaining hair bundle is animated as if a wave caused by this movement were propagating along the hair. Based on this observation, we developed a system to reproduce this technique digitally. In this system, the user sketches the trajectories of a few control points on a hair bone, and their motion is propagated to the whole hair bundle to represent the waving behavior. In this process, the user can interactively adjust two parameters: swing speed and wave propagation delay. As a system evaluation, we conducted a user test in which several subjects reproduced a sample animation using our system.
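The described tip motion and its propagation can be sketched as a figure-eight (Lissajous) offset applied per joint, with a delay that grows along the strand. All parameters here are illustrative assumptions, not values from the paper.

```python
import math

def figure_eight(t, amp=1.0, speed=1.0):
    """Figure-eight offset at time t: the x component oscillates once per
    cycle while the y component oscillates twice, tracing a Lissajous 8."""
    return (amp * math.sin(speed * t), amp * 0.5 * math.sin(2.0 * speed * t))

def strand_offsets(t, n_joints, delay=0.3, amp=1.0, speed=1.0):
    """Per-joint offsets; joint 0 is the tip, and each later joint repeats
    the tip's motion with an increasing time lag, so the figure-eight
    movement propagates along the strand as a wave."""
    return [figure_eight(t - i * delay, amp, speed) for i in range(n_joints)]

offsets = strand_offsets(1.0, 3, delay=0.3)
```

The delay parameter corresponds to the interactively adjustable wave propagation delay, and speed to the swing speed.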
Citations: 0