We built an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler effect to translate multidimensional movements into user interface commands. Our system requires only a speaker and a microphone to operate, but can be augmented with additional speakers. Since these components are already included in most end-user systems, our design makes gesture-based input accessible to a wider range of users. We detect complex gestures by generating a known high-frequency tone from multiple speakers and sensing movement through the resulting changes in the received sound waves. We present the results of a user study of Multiwave that evaluates recognition rates for different gestures, and report error rates comparable to or better than the current state of the art despite the added gesture complexity. We also report subjective user feedback and lessons learned from our system that provide additional insight for future applications of multidimensional acoustic gesture recognition.
{"title":"Multiwave: Complex Hand Gesture Recognition Using the Doppler Effect","authors":"Corey R. Pittman, J. Laviola","doi":"10.20380/GI2017.13","DOIUrl":"https://doi.org/10.20380/GI2017.13","url":null,"abstract":"We built an acoustic, gesture-based recognition system called Multiwave, which leverages the Doppler Effect to translate multidimensional movements into user interface commands. Our system only requires the use of a speaker and microphone to be operational, but can be augmented with more speakers. Since these components are already included in most end user systems, our design makes gesture-based input more accessible to a wider range of end users. We are able to detect complex gestures by generating a known high frequency tone from multiple speakers and detecting movement using changes in the sound waves. \u0000 \u0000We present the results of a user study of Multiwave to evaluate recognition rates for different gestures and report error rates comparable to or better than the current state of the art despite additional complexity. We also report subjective user feedback and some lessons learned from our system that provide additional insight for future applications of multidimensional acoustic gesture recognition.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"97-106"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46622632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online software tutorials help a wide range of users acquire skills with complex software, but are not always easy to follow. For example, a tutorial might target users with a high skill level, or it might contain errors and omissions. Prior work has shown that user contributions, such as user comments, can add value to a tutorial. Building on this prior work, we investigate an approach to soliciting structured tutorial enhancements from tutorial readers. We illustrate this approach through a prototype called Antorial, and evaluate its impact on reader contributions through a multi-session study with 13 participants. Our findings suggest that scaffolding tutorial contributions has positive impacts on both the number and type of reader contributions. Our findings also point to design considerations for systems that aim to support community-based tutorial refinement, and suggest promising directions for future research.
{"title":"Tell Me More! Soliciting Reader Contributions to Software Tutorials","authors":"P. Dubois, Volodymyr Dziubak, Andrea Bunt","doi":"10.20380/GI2017.03","DOIUrl":"https://doi.org/10.20380/GI2017.03","url":null,"abstract":"Online software tutorials help a wide range of users acquireskills with complex software, but are not always easy to follow.For example, a tutorial might target users with a high skill level,or it might contain errors and omissions. Prior work has shownthat user contributions, such as user comments, can add value to atutorial. Building on this prior work, we investigate an approachto soliciting structured tutorial enhancements from tutorialreaders. We illustrate this approach through a prototype calledAntorial, and evaluate its impact on reader contributions through amulti-session study with 13 participants. Our findings suggest thatscaffolding tutorial contributions has positive impacts on both thenumber and type of reader contributions. Our findings also pointto design considerations for systems that aim to supportcommunity-based tutorial refinement, and suggest promisingdirections for future research.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"16-23"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46434924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a framework for generating animated shadow art using occluders under ballistic motion. We apply stochastic optimization to find the parameters of a multi-body physics simulation that produce a desired shadow at a specific instant in time. We run simulations across many different initial conditions, applying a set of carefully crafted energy functions to evaluate the motion trajectories and the multi-body shadows. We then select the optimal parameters, resulting in a ballistic simulation that produces ephemeral shadow art. Users can design physically plausible dynamic artwork that would be extremely challenging, if not impossible, to achieve manually. We present and analyze a number of compelling examples.
{"title":"Ballistic Shadow Art","authors":"Xiaozhong Chen, S. Andrews, D. Nowrouzezahrai, P. Kry","doi":"10.20380/GI2017.24","DOIUrl":"https://doi.org/10.20380/GI2017.24","url":null,"abstract":"We present a framework for generating animated shadow art using occluders under ballistic motion. We apply a stochastic optimization to find the parameters of a multi-body physics simulation that produce a desired shadow at a specific instant in time. We perform simulations across many different initial conditions, applying a set of carefully crafted energy functions to evaluate the motion trajectory and multi-body shadows. We select the optimal parameters, resulting in a ballistics simulation that produces ephemeral shadow art. Users can design physically-plausible dynamic artwork that would be extremely challenging if even possible to achieve manually. We present and analyze number of compelling examples.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"190-198"},"PeriodicalIF":0.0,"publicationDate":"2017-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44778799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physically-based accurate soft shadows are typically computed by evaluating a visibility function over several point light sources that approximate an area light source. This visibility evaluation is computationally expensive for hundreds of light source samples, yielding performance far from real time. One way to reduce the computational cost of the visibility evaluation is to adaptively reduce the number of samples required to generate accurate soft shadows. Unfortunately, adaptive area light source sampling is prone to temporal incoherence, generates banding artifacts, and is slower than uniform sampling in some scene configurations. In this paper, we aim to solve these problems by proposing a revectorization-based accurate soft shadow algorithm. We take advantage of the improved accuracy obtained with shadow revectorization to generate accurate soft shadows from a few light source samples, while producing temporally coherent soft shadows at interactive frame rates. We also propose an algorithm that restricts the costly accurate soft shadow evaluation to penumbra fragments only. Our results show that our approach is, in general, faster than uniform sampling and more accurate than real-time soft shadow algorithms.
{"title":"Revectorization-Based Accurate Soft Shadow using Adaptive Area Light Source Sampling","authors":"Márcio C. F. Macedo, A. Apolinario","doi":"10.20380/GI2017.23","DOIUrl":"https://doi.org/10.20380/GI2017.23","url":null,"abstract":"Physically-based accurate soft shadows are typically computed by the evaluation of a visibility function over several point light sources which approximate an area light source. This visibility evaluation is computationally expensive for hundreds of light source samples, providing performance far from real-time. One solution to reduce the computational cost of the visibility evaluation is to adaptively reduce the number of samples required to generate accurate soft shadows. Unfortunately, adaptive area light source sampling is prone to temporal incoherence, generation of banding artifacts and is slower than uniform sampling in some scene configurations. In this paper, we aim to solve these problems by the proposition of a revectorization-based accurate soft shadow algorithm. We take advantage of the improved accuracy obtained with the shadow revectorization to generate accurate soft shadows from a few light source samples, while producing temporally coherent soft shadows at interactive frame rates. Also, we propose an algorithm which restricts the costly accurate soft shadow evaluation for penumbra fragments only. The results obtained show that our approach is, in general, faster than the uniform sampling approach and is more accurate than the real-time soft shadow algorithms.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"181-189"},"PeriodicalIF":0.0,"publicationDate":"2017-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42348865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Surface selection operations by a user are fundamental for many applications and a standard tool in mesh editing software. Unfortunately, defining a selection is straightforward only if the region is visible and lies on a convex model. Concave surfaces can exhibit self-occlusions, which require multiple camera positions to obtain unobstructed views; the process thus becomes iterative and cumbersome. Our novel approach enables selections that lie under occlusions, and even on the backside of objects, at interactive rates and for arbitrary depth complexity. We rely on a user-drawn curve in screen space, which is projected onto the mesh and analyzed with respect to visibility to guarantee a continuous path on the surface. Our occlusion-aware surface-processing method readily enables a number of applications. As examples, we show continuous painting on the surface, selecting regions for texturing, and creating illustrative cutaways from nested models and animating them.
{"title":"Cut and Paint: Occlusion-Aware Subset Selection for Surface Processing","authors":"M. Radwan, S. Ohrhallinger, E. Eisemann, M. Wimmer","doi":"10.20380/GI2017.11","DOIUrl":"https://doi.org/10.20380/GI2017.11","url":null,"abstract":"Surface selection operations by a user are fundamental for many applications and a standard tool in mesh editing software. Unfortunately, defining a selection is only straightforward if the region is visible and on a convex model. Concave surfaces can exhibit self-occlusions, which require using multiple camera positions to obtain unobstructed views. The process thus becomes iterative and cumbersome. Our novel approach enables selections to lie under occlusions and even on the backside of objects and for arbitrary depth complexity at interactive rates. We rely on a user-drawn curve in screen space, which is projected onto the mesh and analyzed with respect to visibility to guarantee a continuous path on the surface. Our occlusion-aware surface-processing method enables a number of applications in an easy way. As examples, we show continuous painting on the surface, selecting regions for texturing, creating illustrative cutaways from nested models and animate them.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"4 1","pages":"82-89"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88605051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a new representation for trimmed parametric surfaces. Given a set of trimming curves in the parametric domain of a surface, our method locally reparametrizes the parameter space to permit accurate representation of these features without partitioning the surface into subsurfaces. Instead, the parameter space is segmented into subspaces containing the trimming curves, whose boundaries are aligned to the local parameter axes. When multiple trimming curves are present, intersecting subspaces are further segmented using local Voronoi curve diagrams, which allows the overlapping subspace to be distributed equally between the trimming curves. Transition patches are then used to reparametrize the areas around the trimming curves to accommodate the trimmed edges. This allows high-quality interpolation of the trimmed edges while still allowing parametric referencing and trimmed-surface sampling.
{"title":"Parameter Aligned Trimmed Surfaces","authors":"S. Halbert, F. Samavati, Adam Runions","doi":"10.20380/GI2017.12","DOIUrl":"https://doi.org/10.20380/GI2017.12","url":null,"abstract":"We present a new representation for trimmed parametric surfaces. Given a set of trimming curves in the parametric domain of a surface, our method locally reparametrizes the parameter space to permit accurate representation of these features without partitioning the surface into subsurfaces. Instead, the parameter space is segmented into subspaces containing the trimming curves, the boundaries of which are aligned to the local parameter axes. When multiple trimming curves are present, intersecting subspaces are further segmented using local Voronoı̈ curve diagrams which allows the subspace to be distributed equally between the trimming curves. Transition patches are then used to reparametrize the areas around the trimming curves to accommodate the trimmed edges. This allows for high quality interpolation of the trimmed edges while still allowing parametric referencing and trimmed surface sampling.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"23 1","pages":"90-96"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90909121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper constitutes the invited publication that CHCCS extends to the Achievement Award winner. This year, we experiment with a new interview format, which permits a casual discussion of the research area, insights, and contributions of the award winner. What follows is an edited version of a conversation that took place on April 7, 2016, via Google Hangouts.
{"title":"A Conversation with the CHCCS/SCDHM 2016 Achievement Award Winner","authors":"M. V. D. Panne, P. Kry","doi":"10.20380/GI2016.01","DOIUrl":"https://doi.org/10.20380/GI2016.01","url":null,"abstract":"This paper constitutes the invited publication that CHCCS extends to the Achievement award winner. This year, we experiment with a new interview format, which permits a casual discussion of the research area, insights, and contributions of the award winner. What follows is an edited version of a conversation that took place on April 7, 2016, via Google Hangouts.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"16 1","pages":"1-3"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78569366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present RealFusion, an interactive workflow that supports early-stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object art, wherein new representations are created by composing existing objects. The key motivation behind our approach is the direct creation of 3D artifacts during design ideation, in contrast to the conventional practice of 2D sketching. RealFusion comprises three creative states in which users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface consisting of a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.
{"title":"RealFusion: An Interactive Workflow for Repurposing Real-World Objects towards Early-stage Creative Ideation","authors":"Cecil Piya, Vinayak Vinayak, Yunbo Zhang, K. Ramani","doi":"10.20380/GI2016.11","DOIUrl":"https://doi.org/10.20380/GI2016.11","url":null,"abstract":"We present RealFusion, an interactive workflow that supports early stage design ideation in a digital 3D medium. RealFusion is inspired by the practice of found-object-art, wherein new representations are created by composing existing objects. The key motivation behind our approach is direct creation of 3D artifacts during design ideation, in contrast to conventional practice of employing 2D sketching. RealFusion comprises of three creative states where users can (a) repurpose physical objects as modeling components, (b) modify the components to explore different forms, and (c) compose them into a meaningful 3D model. We demonstrate RealFusion using a simple interface that comprises of a depth sensor and a smartphone. To achieve direct and efficient manipulation of modeling elements, we also utilize mid-air interactions with the smartphone. We conduct a user study with novice designers to evaluate the creative outcomes that can be achieved using RealFusion.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"34 1","pages":"85-92"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87528076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters is still a difficult problem, even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for the acquisition of spatially varying anisotropic materials that samples only a small number of directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal, and reflectance parameters from as few as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.
{"title":"Capturing Spatially Varying Anisotropic Reflectance Parameters using Fourier Analysis","authors":"Alban Fichet, Imari Sato, Nicolas Holzschuch","doi":"10.20380/GI2016.09","DOIUrl":"https://doi.org/10.20380/GI2016.09","url":null,"abstract":"Reflectance parameters condition the appearance of objects in photorealistic rendering. Practical acquisition of reflectance parameters is still a difficult problem. Even more so for spatially varying or anisotropic materials, which increase the number of samples required. In this paper, we present an algorithm for acquisition of spatially varying anisotropic materials, sampling only a small number of directions. Our algorithm uses Fourier analysis to extract the material parameters from a sub-sampled signal. We are able to extract diffuse and specular reflectance, direction of anisotropy, surface normal and reflectance parameters from as little as 20 sample directions. Our system makes no assumption about the stationarity or regularity of the materials, and can recover anisotropic effects at the pixel level.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"26 1","pages":"65-73"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88230039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marker-based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker-based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using a larger number of markers, and tracking many markers can quickly become intractable due to issues such as occlusion, swapping, and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality while using fewer markers, by exploiting information from sparse make-up patches visible in the video between the markers. Our method uses a classification-based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a head-mounted camera (HMC), which provides 3D facial marker tracking and corresponding video, we fit accurate, production-quality blendshape models to this data, resulting in high-quality animations.
{"title":"Reading Between the Dots: Combining 3D Markers and FACS Classification for High-Quality Blendshape Facial Animation","authors":"Shridhar Ravikumar, Colin Davidson, Dmitry Kit, N. Campbell, L. Benedetti, D. Cosker","doi":"10.20380/GI2016.18","DOIUrl":"https://doi.org/10.20380/GI2016.18","url":null,"abstract":"Marker based performance capture is one of the most widely used approaches for facial tracking owing to its robustness. In practice, marker based systems do not capture the performance with complete fidelity and often require subsequent manual adjustment to incorporate missing visual details. This problem persists even when using larger number of markers. Tracking a large number of markers can also quickly become intractable due to issues such as occlusion, swapping and merging of markers. We present a new approach for fitting blendshape models to motion-capture data that improves quality, by exploiting information from sparse make-up patches in the video between the markers, while using fewer markers. Our method uses a classification based approach that detects FACS Action Units and their intensities to assist the solver in predicting optimal blendshape weights while taking perceptual quality into consideration. Our classifier is independent of the performer; once trained, it can be applied to multiple performers. Given performances captured using a Head Mounted Camera (HMC), which provides 3D facial marker based tracking and corresponding video, we fit accurate, production quality blendshape models to this data resulting in high-quality animations.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"8 1","pages":"143-151"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85418774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}