
Proceedings. Graphics Interface (Conference): Latest Publications

Multiwave: Complex Hand Gesture Recognition Using the Doppler Effect
Pub Date: 2017-06-01 DOI: 10.20380/GI2017.13
Corey R. Pittman, J. Laviola
We built an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler Effect to translate multidimensional movements into user interface commands. Our system only requires the use of a speaker and microphone to be operational, but can be augmented with more speakers. Since these components are already included in most end user systems, our design makes gesture-based input more accessible to a wider range of end users. We are able to detect complex gestures by generating a known high frequency tone from multiple speakers and detecting movement using changes in the sound waves. We present the results of a user study of Multiwave to evaluate recognition rates for different gestures and report error rates comparable to or better than the current state of the art despite additional complexity. We also report subjective user feedback and some lessons learned from our system that provide additional insight for future applications of multidimensional acoustic gesture recognition.
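A minimal sketch (our illustration, not the authors' code) of the core Doppler idea the abstract describes: emit a known high-frequency tone, then estimate radial hand velocity from the frequency shift of the peak observed in the microphone spectrum. All constants and names are assumptions.

```python
import numpy as np

FS = 44100          # sample rate (Hz), assumed
TONE = 18000.0      # emitted pilot tone (Hz), assumed
C = 343.0           # speed of sound (m/s)

def doppler_velocity(frame: np.ndarray) -> float:
    """Estimate radial hand velocity from one audio frame.

    A hand moving toward the microphone shifts the reflected tone above
    TONE; moving away shifts it below.
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    # Strongest peak in a narrow band around the emitted tone.
    band = (freqs > TONE - 500) & (freqs < TONE + 500)
    f_obs = freqs[band][np.argmax(spectrum[band])]
    # First-order Doppler approximation for a moving reflector:
    # f_obs ~= TONE * (1 + 2*v/C)  =>  v ~= C * (f_obs - TONE) / (2 * TONE)
    return C * (f_obs - TONE) / (2.0 * TONE)
```

With one such estimate per speaker (each speaker emitting its own tone), the per-speaker velocity components can be combined into the multidimensional motion vector the system maps to gestures.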
{"title":"Multiwave: Complex Hand Gesture Recognition Using the Doppler Effect","authors":"Corey R. Pittman, J. Laviola","doi":"10.20380/GI2017.13","DOIUrl":"https://doi.org/10.20380/GI2017.13","url":null,"abstract":"We built an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler Effect to translate multidimensional movements into user interface commands. Our system only requires the use of a speaker and microphone to be operational, but can be augmented with more speakers. Since these components are already included in most end user systems, our design makes gesture-based input more accessible to a wider range of end users. We are able to detect complex gestures by generating a known high frequency tone from multiple speakers and detecting movement using changes in the sound waves. \u0000 \u0000We present the results of a user study of Multiwave to evaluate recognition rates for different gestures and report error rates comparable to or better than the current state of the art despite additional complexity. We also report subjective user feedback and some lessons learned from our system that provide additional insight for future applications of multidimensional acoustic gesture recognition.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"97-106"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46622632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Tell Me More! Soliciting Reader Contributions to Software Tutorials
Pub Date: 2017-06-01 DOI: 10.20380/GI2017.03
P. Dubois, Volodymyr Dziubak, Andrea Bunt
Online software tutorials help a wide range of users acquire skills with complex software, but are not always easy to follow. For example, a tutorial might target users with a high skill level, or it might contain errors and omissions. Prior work has shown that user contributions, such as user comments, can add value to a tutorial. Building on this prior work, we investigate an approach to soliciting structured tutorial enhancements from tutorial readers. We illustrate this approach through a prototype called Antorial, and evaluate its impact on reader contributions through a multi-session study with 13 participants. Our findings suggest that scaffolding tutorial contributions has positive impacts on both the number and type of reader contributions. Our findings also point to design considerations for systems that aim to support community-based tutorial refinement, and suggest promising directions for future research.
{"title":"Tell Me More! Soliciting Reader Contributions to Software Tutorials","authors":"P. Dubois, Volodymyr Dziubak, Andrea Bunt","doi":"10.20380/GI2017.03","DOIUrl":"https://doi.org/10.20380/GI2017.03","url":null,"abstract":"Online software tutorials help a wide range of users acquireskills with complex software, but are not always easy to follow.For example, a tutorial might target users with a high skill level,or it might contain errors and omissions. Prior work has shownthat user contributions, such as user comments, can add value to atutorial. Building on this prior work, we investigate an approachto soliciting structured tutorial enhancements from tutorialreaders. We illustrate this approach through a prototype calledAntorial, and evaluate its impact on reader contributions through amulti-session study with 13 participants. Our findings suggest thatscaffolding tutorial contributions has positive impacts on both thenumber and type of reader contributions. Our findings also pointto design considerations for systems that aim to supportcommunity-based tutorial refinement, and suggest promisingdirections for future research.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"16-23"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46434924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Ballistic Shadow Art
Pub Date: 2017-06-01 DOI: 10.20380/GI2017.24
Xiaozhong Chen, S. Andrews, D. Nowrouzezahrai, P. Kry
We present a framework for generating animated shadow art using occluders under ballistic motion. We apply a stochastic optimization to find the parameters of a multi-body physics simulation that produce a desired shadow at a specific instant in time. We perform simulations across many different initial conditions, applying a set of carefully crafted energy functions to evaluate the motion trajectory and multi-body shadows. We select the optimal parameters, resulting in a ballistics simulation that produces ephemeral shadow art. Users can design physically plausible dynamic artwork that would be extremely challenging, if even possible, to achieve manually. We present and analyze a number of compelling examples.
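A minimal sketch of the optimization pattern described above, under heavy simplifying assumptions of ours: the "shadow" is reduced to an orthographic projection of a single occluder onto the ground plane, and the energy is just the squared distance to a target shadow location, with random search standing in for the paper's stochastic optimizer.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity

def position(p0, v0, t):
    """Ballistic motion: p(t) = p0 + v0*t + 0.5*g*t^2."""
    return p0 + v0 * t + 0.5 * G * t * t

def shadow_energy(p0, v0, t, target_xy):
    """Toy energy: squared distance between the occluder's shadow
    (light directly overhead, so the shadow is the xy position)
    and the desired shadow location at time t."""
    p = position(p0, v0, t)
    return np.sum((p[:2] - target_xy) ** 2)

def optimize_launch(p0, target_xy, t_star, iters=2000, sigma=1.0, seed=0):
    """Random search: perturb the launch velocity, keep improvements."""
    rng = np.random.default_rng(seed)
    best_v = np.zeros(3)
    best_e = shadow_energy(p0, best_v, t_star, target_xy)
    for _ in range(iters):
        cand = best_v + rng.normal(0.0, sigma, size=3)
        e = shadow_energy(p0, cand, t_star, target_xy)
        if e < best_e:
            best_v, best_e = cand, e
    return best_v, best_e

v, e = optimize_launch(p0=np.array([0.0, 0.0, 0.0]),
                       target_xy=np.array([3.0, 2.0]), t_star=1.5)
```

The paper's actual energies compare rendered multi-body shadow images against a target silhouette; the search structure, however, follows this simulate-evaluate-keep-the-best loop.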
{"title":"Ballistic Shadow Art","authors":"Xiaozhong Chen, S. Andrews, D. Nowrouzezahrai, P. Kry","doi":"10.20380/GI2017.24","DOIUrl":"https://doi.org/10.20380/GI2017.24","url":null,"abstract":"We present a framework for generating animated shadow art using occluders under ballistic motion. We apply a stochastic optimization to find the parameters of a multi-body physics simulation that produce a desired shadow at a specific instant in time. We perform simulations across many different initial conditions, applying a set of carefully crafted energy functions to evaluate the motion trajectory and multi-body shadows. We select the optimal parameters, resulting in a ballistics simulation that produces ephemeral shadow art. Users can design physically-plausible dynamic artwork that would be extremely challenging if even possible to achieve manually. We present and analyze number of compelling examples.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"190-198"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44778799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Revectorization-Based Accurate Soft Shadow using Adaptive Area Light Source Sampling
Pub Date: 2017-01-06 DOI: 10.20380/GI2017.23
Márcio C. F. Macedo, A. Apolinario
Physically-based accurate soft shadows are typically computed by the evaluation of a visibility function over several point light sources which approximate an area light source. This visibility evaluation is computationally expensive for hundreds of light source samples, providing performance far from real-time. One solution to reduce the computational cost of the visibility evaluation is to adaptively reduce the number of samples required to generate accurate soft shadows. Unfortunately, adaptive area light source sampling is prone to temporal incoherence, generation of banding artifacts, and is slower than uniform sampling in some scene configurations. In this paper, we aim to solve these problems by proposing a revectorization-based accurate soft shadow algorithm. We take advantage of the improved accuracy obtained with the shadow revectorization to generate accurate soft shadows from a few light source samples, while producing temporally coherent soft shadows at interactive frame rates. Also, we propose an algorithm which restricts the costly accurate soft shadow evaluation to penumbra fragments only. The results obtained show that our approach is, in general, faster than the uniform sampling approach and more accurate than real-time soft shadow algorithms.
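A minimal sketch of the uniform-sampling baseline the abstract starts from: approximate the area light by n point samples and average a binary visibility function. The `visible(p, l)` callback is hypothetical, standing in for whatever occlusion query (shadow map, ray cast) the renderer provides; the linear cost in n is exactly what motivates adaptive sampling.

```python
import numpy as np

def soft_shadow(p, light_center, light_u, light_v, visible, n=16):
    """Fraction of a rectangular area light visible from surface point p.

    light_u, light_v span the light's extent; visible(p, sample) is a
    hypothetical predicate returning True if the segment from p to the
    light sample is unoccluded.
    """
    rng = np.random.default_rng(0)
    hits = 0
    for _ in range(n):
        a, b = rng.uniform(-0.5, 0.5, size=2)
        sample = light_center + a * light_u + b * light_v
        hits += bool(visible(p, sample))
    return hits / n
```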
{"title":"Revectorization-Based Accurate Soft Shadow using Adaptive Area Light Source Sampling","authors":"Márcio C. F. Macedo, A. Apolinario","doi":"10.20380/GI2017.23","DOIUrl":"https://doi.org/10.20380/GI2017.23","url":null,"abstract":"Physically-based accurate soft shadows are typically computed by the evaluation of a visibility function over several point light sources which approximate an area light source. This visibility evaluation is computationally expensive for hundreds of light source samples, providing performance far from real-time. One solution to reduce the computational cost of the visibility evaluation is to adaptively reduce the number of samples required to generate accurate soft shadows. Unfortunately, adaptive area light source sampling is prone to temporal incoherence, generation of banding artifacts and is slower than uniform sampling in some scene configurations. In this paper, we aim to solve these problems by the proposition of a revectorization-based accurate soft shadow algorithm. We take advantage of the improved accuracy obtained with the shadow revectorization to generate accurate soft shadows from a few light source samples, while producing temporally coherent soft shadows at interactive frame rates. Also, we propose an algorithm which restricts the costly accurate soft shadow evaluation for penumbra fragments only. The results obtained show that our approach is, in general, faster than the uniform sampling approach and is more accurate than the real-time soft shadow algorithms.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"181-189"},"PeriodicalIF":0.0,"publicationDate":"2017-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42348865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Cut and Paint: Occlusion-Aware Subset Selection for Surface Processing
Pub Date: 2017-01-01 DOI: 10.20380/GI2017.11
M. Radwan, S. Ohrhallinger, E. Eisemann, M. Wimmer
Surface selection operations by a user are fundamental for many applications and a standard tool in mesh editing software. Unfortunately, defining a selection is only straightforward if the region is visible and on a convex model. Concave surfaces can exhibit self-occlusions, which require using multiple camera positions to obtain unobstructed views. The process thus becomes iterative and cumbersome. Our novel approach enables selections that lie under occlusions and even on the back side of objects, at arbitrary depth complexity and at interactive rates. We rely on a user-drawn curve in screen space, which is projected onto the mesh and analyzed with respect to visibility to guarantee a continuous path on the surface. Our occlusion-aware surface-processing method enables a number of applications in an easy way. As examples, we show continuous painting on the surface, selecting regions for texturing, and creating illustrative cutaways from nested models and animating them.
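A minimal sketch (our illustration, not the authors' pipeline) of the projection step: each sample of the screen-space stroke becomes a camera ray, and all ray/triangle hits are kept rather than only the nearest one, which is what makes selection under occlusions possible downstream.

```python
import numpy as np

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection; returns hit distance t or None."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(d, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to triangle
    inv = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(d, qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv
    return t if t > eps else None

def project_stroke(rays, triangles):
    """For each (origin, direction) ray, collect every surface hit,
    sorted front to back; later logic can pick the visible hit or
    deliberately continue onto occluded or back-facing geometry."""
    hits_per_sample = []
    for orig, d in rays:
        hits = [t for tri in triangles
                if (t := ray_triangle(orig, d, *tri)) is not None]
        hits_per_sample.append(sorted(hits))
    return hits_per_sample
```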
{"title":"Cut and Paint: Occlusion-Aware Subset Selection for Surface Processing","authors":"M. Radwan, S. Ohrhallinger, E. Eisemann, M. Wimmer","doi":"10.20380/GI2017.11","DOIUrl":"https://doi.org/10.20380/GI2017.11","url":null,"abstract":"Surface selection operations by a user are fundamental for many applications and a standard tool in mesh editing software. Unfortunately, defining a selection is only straightforward if the region is visible and on a convex model. Concave surfaces can exhibit self-occlusions, which require using multiple camera positions to obtain unobstructed views. The process thus becomes iterative and cumbersome. Our novel approach enables selections to lie under occlusions and even on the backside of objects and for arbitrary depth complexity at interactive rates. We rely on a user-drawn curve in screen space, which is projected onto the mesh and analyzed with respect to visibility to guarantee a continuous path on the surface. Our occlusion-aware surface-processing method enables a number of applications in an easy way. As examples, we show continuous painting on the surface, selecting regions for texturing, creating illustrative cutaways from nested models and animate them.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"4 1","pages":"82-89"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88605051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parameter Aligned Trimmed Surfaces
Pub Date: 2017-01-01 DOI: 10.20380/GI2017.12
S. Halbert, F. Samavati, Adam Runions
We present a new representation for trimmed parametric surfaces. Given a set of trimming curves in the parametric domain of a surface, our method locally reparametrizes the parameter space to permit accurate representation of these features without partitioning the surface into subsurfaces. Instead, the parameter space is segmented into subspaces containing the trimming curves, the boundaries of which are aligned to the local parameter axes. When multiple trimming curves are present, intersecting subspaces are further segmented using local Voronoï curve diagrams which allows the subspace to be distributed equally between the trimming curves. Transition patches are then used to reparametrize the areas around the trimming curves to accommodate the trimmed edges. This allows for high quality interpolation of the trimmed edges while still allowing parametric referencing and trimmed surface sampling.
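A minimal sketch (our simplification) of the Voronoï partition idea: sample each trimming curve as points in the (u, v) parameter domain, then label every parameter-space sample by its nearest curve. The boundary between labels approximates the Voronoï curve diagram that splits a contested subspace equally between curves.

```python
import numpy as np

def nearest_curve_labels(grid_pts, curves):
    """grid_pts: (N, 2) samples of the parameter domain.
    curves: list of (M_i, 2) arrays, each a densely sampled trimming curve.
    Returns an (N,) array of nearest-curve indices."""
    dists = np.stack([
        np.linalg.norm(grid_pts[:, None, :] - c[None, :, :], axis=2).min(axis=1)
        for c in curves
    ])
    return dists.argmin(axis=0)

# Example: two trimming curves in the unit-square parameter domain.
u = np.linspace(0.0, 1.0, 50)
curve_a = np.column_stack([u, 0.2 + 0.1 * np.sin(6 * u)])
curve_b = np.column_stack([u, 0.8 * np.ones_like(u)])
uu, vv = np.meshgrid(u, u)
grid = np.column_stack([uu.ravel(), vv.ravel()])
labels = nearest_curve_labels(grid, [curve_a, curve_b])
```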
{"title":"Parameter Aligned Trimmed Surfaces","authors":"S. Halbert, F. Samavati, Adam Runions","doi":"10.20380/GI2017.12","DOIUrl":"https://doi.org/10.20380/GI2017.12","url":null,"abstract":"We present a new representation for trimmed parametric surfaces. Given a set of trimming curves in the parametric domain of a surface, our method locally reparametrizes the parameter space to permit accurate representation of these features without partitioning the surface into subsurfaces. Instead, the parameter space is segmented into subspaces containing the trimming curves, the boundaries of which are aligned to the local parameter axes. When multiple trimming curves are present, intersecting subspaces are further segmented using local Voronoı̈ curve diagrams which allows the subspace to be distributed equally between the trimming curves. Transition patches are then used to reparametrize the areas around the trimming curves to accommodate the trimmed edges. This allows for high quality interpolation of the trimmed edges while still allowing parametric referencing and trimmed surface sampling.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"23 1","pages":"90-96"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90909121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Conversation with the CHCCS/SCDHM 2016 Achievement Award Winner
Pub Date: 2016-06-01 DOI: 10.20380/GI2016.01
M. V. D. Panne, P. Kry
This paper constitutes the invited publication that CHCCS extends to the Achievement award winner. This year, we experiment with a new interview format, which permits a casual discussion of the research area, insights, and contributions of the award winner. What follows is an edited version of a conversation that took place on April 7, 2016, via Google Hangouts.
{"title":"A Conversation with the CHCCS/SCDHM 2016 Achievement Award Winner","authors":"M. V. D. Panne, P. Kry","doi":"10.20380/GI2016.01","DOIUrl":"https://doi.org/10.20380/GI2016.01","url":null,"abstract":"This paper constitutes the invited publication that CHCCS extends to the Achievement award winner. This year, we experiment with a new interview format, which permits a casual discussion of the research area, insights, and contributions of the award winner. What follows is an edited version of a conversation that took place on April 7, 2016, via Google Hangouts.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"16 1","pages":"1-3"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78569366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RealFusion: An Interactive Workflow for Repurposing Real-World Objects towards Early-stage Creative Ideation
Pub Date: 2016-06-01 DOI: 10.20380/GI2016.11
Cecil Piya, Vinayak Vinayak, Yunbo Zhang, K. Ramani
We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of employing 2D sketching. RealFusion comprises three creative states where users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface comprising a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.
{"title":"RealFusion: An Interactive Workflow for Repurposing Real-World Objects towards Early-stage Creative Ideation","authors":"Cecil Piya, Vinayak Vinayak, Yunbo Zhang, K. Ramani","doi":"10.20380/GI2016.11","DOIUrl":"https://doi.org/10.20380/GI2016.11","url":null,"abstract":"We present RealFusion, an interactive workflow that supports early stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object-art, wherein new representations are created by composing existing objects. The key motivation behind our approach is direct creation of 3D artifacts during design ideation, in contrast to conventional practice of employing 2D sketching. RealFusion comprises of three creative states where users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface that comprises of a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"34 1","pages":"85-92"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87528076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Capturing Spatially Varying Anisotropic Reflectance Parameters using Fourier Analysis
Pub Date: 2016-06-01 DOI: 10.20380/GI2016.09
Alban Fichet, Imari Sato, Nicolas Holzschuch
Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters is still a difficult problem, even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for the acquisition of spatially varying anisotropic materials that samples only a small number of directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal and reflectance parameters from as few as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.
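A minimal sketch of the kind of Fourier trick such a method builds on, with the reflectance model and all names assumed by us: for an anisotropic material, reflectance sampled over the azimuth angle is approximately periodic, and the phase of a low-order harmonic reveals the anisotropy axis.

```python
import numpy as np

def anisotropy_direction(reflectance_samples):
    """Given reflectance measured at K evenly spaced azimuth angles in
    [0, 2*pi), recover the dominant anisotropy axis from the phase of the
    second Fourier harmonic (period pi, since a brushed direction and its
    opposite are equivalent)."""
    spectrum = np.fft.rfft(reflectance_samples)
    phase = np.angle(spectrum[2])    # 2nd harmonic: two cycles per turn
    return (-phase / 2.0) % np.pi    # axis angle in [0, pi)

# Synthetic check: a cos^2 lobe oriented at 30 degrees.
K = 64
theta = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
axis = np.deg2rad(30.0)
samples = 0.2 + np.cos(theta - axis) ** 2
print(np.rad2deg(anisotropy_direction(samples)))  # prints ~30
```

The phase-based estimate works because cos^2(theta - a) = 0.5 + 0.5*cos(2*theta - 2*a), so the second DFT bin carries phase -2a.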
{"title":"Capturing Spatially Varying Anisotropic Reflectance Parameters using Fourier Analysis","authors":"Alban Fichet, Imari Sato, Nicolas Holzschuch","doi":"10.20380/GI2016.09","DOIUrl":"https://doi.org/10.20380/GI2016.09","url":null,"abstract":"Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters is still a difficult problem. Even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for acquisition of spatially varying anisotropic materials, sampling only a small number of directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal and reflectance parameters from as little as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"26 1","pages":"65-73"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88230039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation
Pub Date: 2016-06-01 DOI: 10.20380/GI2016.18
Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker
Marker-based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker-based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using a larger number of markers. Tracking a large number of markers can also quickly become intractable due to issues such as occlusion, swapping and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification-based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a Head Mounted Camera (HMC), which provides 3D facial-marker-based tracking and corresponding video, we fit accurate, production-quality blendshape models to this data, resulting in high-quality animations.
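A minimal sketch of the standard blendshape solve that such systems augment: find non-negative weights w so that neutral + B @ w best matches the tracked 3D marker positions. The scipy call and all names are our assumptions; the paper's FACS-classifier guidance would enter as a prior on the weights, which this sketch omits.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def solve_blendshape_weights(markers, neutral, basis):
    """markers: (3m,) stacked tracked marker coordinates
    neutral: (3m,) the same markers on the neutral face
    basis:   (3m, k) per-blendshape marker displacements
    Returns k non-negative blendshape weights minimizing
    || basis @ w - (markers - neutral) ||."""
    delta = markers - neutral
    weights, residual = nnls(basis, delta)
    return weights
```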
{"title":"Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation","authors":"Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker","doi":"10.20380/GI2016.18","DOIUrl":"https://doi.org/10.20380/GI2016.18","url":null,"abstract":"Marker based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using larger number of markers. Tracking a large number of markers can also quickly become intractable due to issues such as occlusion, swapping and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality, by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a Head Mounted Camera (HMC), which provides 3D facial marker based tracking and corresponding video, we fit accurate, production quality blendshape models to this data resulting in high-quality animations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"8 1","pages":"143-151"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85418774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5