
Latest publications in ACM SIGGRAPH 2016 Posters

Real-time 3D face super-resolution from monocular in-the-wild videos
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945145
P. Huber, W. Christmas, A. Hilton, J. Kittler, Matthias Rätsch
We present a fully automatic approach to real-time 3D face reconstruction from monocular in-the-wild videos. We use a 3D Morphable Face Model to obtain a semi-dense shape and combine it with a fast median-based super-resolution technique to obtain a high-fidelity textured 3D face model. Our system does not need prior training and is designed to work in uncontrolled scenarios.
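The poster abstract gives no implementation details; as a rough illustration of the median-based super-resolution idea it mentions, the sketch below assumes that each frame's face texture has already been remapped into a common UV space (via the fitted 3D Morphable Model) and simply fuses the per-texel samples with a median, which is robust to outliers such as specular highlights or occlusions. The function name and array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def median_superresolve(frame_textures, valid_masks):
    """Fuse per-frame face textures (already remapped into a common UV space)
    into one texture via a per-texel median across frames.

    frame_textures : (N, H, W, 3) float array, one remapped texture per frame
    valid_masks    : (N, H, W) bool array, True where a texel was observed
    """
    frames = np.asarray(frame_textures, dtype=np.float32)
    masks = np.asarray(valid_masks, dtype=bool)

    # Hide unobserved texels so they do not affect the median.
    stack = np.where(masks[..., None], frames, np.nan)

    # Per-texel median across frames; robust against outliers such as
    # specular highlights, occlusions, or landmark jitter.
    fused = np.nanmedian(stack, axis=0)

    # Texels never observed in any frame come out as NaN (NumPy warns);
    # fall back to a neutral gray there.
    return np.where(np.isnan(fused), 0.5, fused)
```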
Citations: 2
VisLoiter: a system to visualize loiterers discovered from surveillance videos
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945125
Jianquan Liu, Shoji Nishimura, Takuya Araki
This paper presents a system for visualizing the results of loitering discovery in surveillance videos. Since loitering is a suspicious behaviour that often leads to abnormal situations, such as pickpocketing, its analysis attracts attention from researchers [Bird et al. 2005; Ke et al. 2013; A. et al. 2015]. Most of these works focus on detecting or identifying loitering individuals with human tracking techniques. A robust approach in [Nam 2015] is one of the state-of-the-art methods for detecting loitering persons in crowded scenes, using pedestrian tracking based on spatio-temporal changes. However, such tracking-based methods are quite time-consuming. It is therefore hard to apply loitering detection across multiple cameras over long periods, or to provide at-a-glance visualization of loiterers. To solve this problem, we propose a system named VisLoiter (Figure 1), which enables efficient loitering discovery based on face features extracted from long-duration videos across multiple cameras, instead of relying on tracking. By taking advantage of this efficiency, VisLoiter realizes at-a-glance visualization of loiterers. The visualization consists of three display components: (1) the appearance patterns of loitering individuals, (2) the frequency ranking of loiterers' faces, and (3) lightweight playback of video clips in which a discovered loiterer frequently appeared (see Figure 1 (b) and (c)).
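The abstract describes ranking loiterer faces by appearance frequency across cameras but does not specify an algorithm; the sketch below is a minimal, assumed version of that idea, grouping face feature vectors by nearest-centroid matching and ranking identities by how often, and on how many cameras, they appear. The embedding format, distance threshold, and data layout are hypothetical.

```python
import numpy as np

def rank_loiterer_candidates(detections, match_threshold=0.6):
    """Group face detections into identities and rank identities by
    appearance frequency (ties broken by camera coverage).

    detections : list of dicts with keys
        'embedding' (1-D unit-norm face feature vector, np.ndarray),
        'camera_id', 'timestamp'
    """
    identities = []  # each: {'centroid', 'count', 'cameras', 'n'}

    for det in detections:
        emb = det['embedding']
        best, best_dist = None, match_threshold
        for ident in identities:
            dist = np.linalg.norm(ident['centroid'] - emb)
            if dist < best_dist:
                best, best_dist = ident, dist
        if best is None:
            best = {'centroid': emb.copy(), 'count': 0, 'cameras': set(), 'n': 0}
            identities.append(best)
        # Incrementally update the running centroid of the identity cluster.
        best['n'] += 1
        best['centroid'] += (emb - best['centroid']) / best['n']
        best['count'] += 1
        best['cameras'].add(det['camera_id'])

    # Most frequently seen identities first.
    return sorted(identities,
                  key=lambda i: (i['count'], len(i['cameras'])),
                  reverse=True)
```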
Citations: 9
A modified wheatstone-style head-mounted display prototype for narrow field-of-view video see-through augmented reality
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945084
Pei-Hsuan Tsai, Yu-Hsuan Huang, Yu-Ju Tsai, Hao-Yu Chang, Masatoshi Chang-Ogimoto, M. Ouhyoung
Users often have a poor experience with general-purpose virtual reality head-mounted displays (HMDs) because of the low pixel density seen through the optical lenses. For this reason, a narrow field of view (FoV) and high pixel density are the main goals we pursue for near-field video see-through augmented reality (AR) applications that involve sophisticated operations, such as biological observation with an AR microscope (e.g. Scope+ [Huang et al. 2015]), AR surgery simulation, and telescope applications. Providing resolution high enough to see tiny objects clearly is therefore the central concern of this paper.
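The trade-off the authors exploit, exchanging field of view for angular pixel density at a fixed panel resolution, can be made concrete with a small calculation; the panel width and FoV values below are purely hypothetical and not taken from the prototype.

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Approximate horizontal angular pixel density of a display."""
    return h_pixels / h_fov_deg

# Hypothetical numbers for illustration only: the same 1080-pixel-wide
# panel per eye, viewed through optics with different fields of view.
wide = pixels_per_degree(1080, 90)    # ~12 px/deg: typical wide-FoV VR HMD
narrow = pixels_per_degree(1080, 30)  # ~36 px/deg: same panel, narrow FoV
print(f"wide FoV: {wide:.0f} px/deg, narrow FoV: {narrow:.0f} px/deg")
```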
Citations: 0
ARTTag: aesthetic fiducial markers based on circle pairs
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945116
Shinichi Higashino, Sakiko Nishi, R. Sakamoto
In this paper, we present ARTTag, an aesthetic fiducial marker system whose markers can be designed with any color, texture, shape, or other features as long as circle pairs are integrated. By utilizing the projective properties of circular features, ARTTag is suitable for detection, identification, and camera-based registration in augmented reality (AR) applications.
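The abstract does not describe the detection pipeline; as a loose sketch of how circle-pair candidates might be found, the code below fits ellipses to image contours (circles project to ellipses under perspective) and pairs ellipses whose centers nearly coincide. The concentric-pair criterion, thresholds, and function names are assumptions for illustration, not ARTTag's actual algorithm.

```python
import cv2
import numpy as np

def find_circle_pair_candidates(gray, max_center_dist=5.0):
    """Return pairs of fitted ellipses whose centers nearly coincide.

    gray : 8-bit single-channel image.
    Marker identification and pose estimation are not shown.
    """
    # Binarize (Otsu) and extract contours (OpenCV 4.x return signature).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    ellipses = []
    for c in contours:
        if len(c) >= 5:  # cv2.fitEllipse needs at least 5 contour points
            ellipses.append(cv2.fitEllipse(c))

    # Pair ellipses whose centers (projections of the circle centers)
    # are close together; a placeholder pairing criterion.
    pairs = []
    for i in range(len(ellipses)):
        for j in range(i + 1, len(ellipses)):
            (cx1, cy1), _, _ = ellipses[i]
            (cx2, cy2), _, _ = ellipses[j]
            if np.hypot(cx1 - cx2, cy1 - cy2) < max_center_dist:
                pairs.append((ellipses[i], ellipses[j]))
    return pairs
```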
Citations: 8
Example-based data optimization for facial simulation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945151
Sara C. Schvartzman, M. Romeo
Digital characters are common in modern film visual effects, and the demand for digital actors has increased over the past few years. The success of digitally created actors depends on their believability and, in particular, on the realism of the animation and simulation of their faces. Facial expressions in computer graphics are commonly obtained through linear vertex interpolation techniques such as blend shapes. These enable high artistic control and fast interaction, but cannot properly reproduce collisions or other physical phenomena such as gravity and inertia. Such effects can be achieved by applying simulation techniques on top of the animated facial geometry (e.g. muscle simulation), but doing so could potentially alter the look of the desired facial expression and produce inconsistencies with the work approved in animation. Moreover, animating such muscle rigs can be very cumbersome.
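Since the abstract hinges on linear vertex interpolation (blend shapes), a minimal sketch of how a blend-shape pose is evaluated may help; the array shapes, names, and example values are generic assumptions, not the studio's rig.

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Linear blend-shape evaluation: the posed face is the neutral mesh
    plus a weighted sum of per-shape vertex offsets.

    neutral : (V, 3) neutral-pose vertex positions
    deltas  : (S, V, 3) per-shape offsets (target shape minus neutral)
    weights : (S,) blend weights, typically in [0, 1]
    """
    # Contract the shape axis: sum_s weights[s] * deltas[s].
    return neutral + np.tensordot(weights, deltas, axes=1)

# Tiny illustrative example with 2 shapes and 3 vertices (made-up numbers).
neutral = np.zeros((3, 3))
deltas = np.random.randn(2, 3, 3) * 0.01
posed = evaluate_blendshapes(neutral, deltas, np.array([0.7, 0.3]))
```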
Citations: 1
VRCEMIG: a novel approach to power substation control
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945081
Alexandre Cardoso, E. Lamounier, G. Lima, Paulo do Prado, J. N. Ferreira
In this work, we propose a Virtual Reality based solution that provides a more natural and intuitive environment for controlling electrical operation centers. The research is being carried out in collaboration with the electric company Cemig. The novelty of this approach is that operators can manage the electric system and its components while immersed in a 3D world that reflects the true arrangement found in the real electrical substation. In addition, the solution is designed to provide the operator with all supervisory data within the same virtual environment. We have conducted experiments with the electric company's operators; according to Cemig's employees, the mental effort required to understand the state of the field has been reduced. They also state that a single environment with all data integrated is very important for making engineering decisions.
Citations: 4
Cross-field haptics: push-pull haptics combined with magnetic and electrostatic fields
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945108
Satoshi Hashizume, Kazuki Takazawa, Amy Koike, Yoichi Ochiai
The representation of texture is a major concern during fabrication and manufacturing in many industries. Thus, approaches for fabricating everyday objects and for digitally expressing their textures before the fabrication process have become a popular research area. Although it is easy to change the texture of objects in the digital world (i.e. by simply setting texture parameters), it is difficult to achieve this in the real world.
Citations: 8
Dynamic frame rate: a study on viewer perception of changes in frame rate within an animated movie sequence
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945159
K. Chuang
Dynamic Frame Rate (DFR) is the change in frame rate of a movie sequence in real time as the sequence is playing. Throughout most of the past century, and since the introduction of sound in films, frame rates used in films have been standardized at 24 frames per second despite technological advancement [Salmon et al. 2011]. In the past decade, the spatial resolution of display systems has been increasing while the temporal resolution, the frame rate, has not changed. Because of this, researchers and filmmakers stress that motion judder and blurriness are much more apparent, and they propose that high frame rates will solve the issue [Emoto et al. 2014] [Turnock 2013]. Some industry experts and critics, however, oppose the use of high frame rates [Wilcox 2015]. Despite all the research and attempts at using high frame rates, the idea of using a dynamic frame rate in digital cinema has not been explored in depth. As such, there is very limited information on how people perceive DFR and how it actually works. Understanding DFR and how viewers perceive changes in frame rate will help us adopt new techniques in the creation of cinema: we can use a high frame rate for sequences that benefit from it while keeping the remaining sequences at the standard frame rate. This thesis aims to understand the basics of DFR, how different implementations of DFR change viewer perception, and how people perceive a change of frame rate within a displayed animated movie sequence.
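As a small, assumed illustration of what a dynamic-frame-rate playback schedule amounts to, the sketch below computes per-frame presentation timestamps for a sequence whose shots carry different frame rates; the shot lengths and rates are made-up numbers, not from the study.

```python
def presentation_times(shots):
    """Compute per-frame presentation timestamps for a sequence whose
    frame rate changes from shot to shot (dynamic frame rate).

    shots : list of (frame_count, fps) tuples, in playback order
    Returns a list of timestamps in seconds, one per frame.
    """
    times, t = [], 0.0
    for frame_count, fps in shots:
        dt = 1.0 / fps
        for _ in range(frame_count):
            times.append(t)
            t += dt
    return times

# Hypothetical example: a 24 fps dialogue shot followed by a 48 fps action shot.
ts = presentation_times([(48, 24.0), (96, 48.0)])
print(f"total duration: {ts[-1] + 1 / 48:.2f} s over {len(ts)} frames")
```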
Citations: 0
Relation-based parametrization and exploration of shape collections
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945112
Kurt Leimer, M. Wimmer, Przemyslaw Musialski
With online repositories for 3D models like 3D Warehouse becoming more prevalent and growing ever larger, new possibilities have opened up for experienced and inexperienced users alike. These large collections of shapes can provide inspiration for designers, or make it possible to synthesize new shapes by combining parts from existing shapes, which is both easy to learn and a fast way of creating new shapes.
Citations: 2
The need for interdisciplinary undergraduate research
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945137
W. Joel
In 2009, the ACM/SIGGRAPH Education Committee established an Undergraduate Research Alliance [Undergraduate Research Alliance] to foster the development of undergraduate research in computer graphics and interactive techniques across all related disciplines. Since its inception, the Alliance has hosted sessions at the annual SIGGRAPH conferences to give educators and others the chance to discuss what they have accomplished and what still needs to be done. If we in the SIGGRAPH community wish to continue to expand the envelope of knowledge, we must engage students in the exploration of new ideas as early as possible in their education. The purpose of this poster, therefore, is to present a case study of undergraduate research in the hope that it spurs others to join this endeavor.
Citations: 0