Constant Manteau, Miguel A. Nacenta, Michael Mauderer
We empirically investigate the advantages and disadvantages of color and digit-based methods to represent small scalar fields. We compare two types of color scales (one brightness-based and one that varies in hue, saturation and brightness) with an interactive tooltip that shows the scalar value on demand, and with a symbolic glyph-based approach (FatFonts). Three experiments tested three tasks: reading values, comparing values, and finding extrema. The results provide the first empirical comparisons of color scales with symbol-based techniques. The interactive tooltip enabled higher accuracy and shorter times than the color scales for reading values but showed slow completion times and low accuracy for value comparison and extrema finding tasks. The FatFonts technique showed better speed and accuracy for reading and value comparison, and high accuracy for the extrema finding task at the cost of being the slowest for this task.
"Reading Small Scalar Data Fields: Color Scales vs. Detail on Demand vs. FatFonts." Proceedings of Graphics Interface 2017, pp. 50–56. doi:10.20380/GI2017.07
Seongkook Heo, M. Annett, B. Lafreniere, Tovi Grossman, G. Fitzmaurice
Smartwatches have the potential to enable quick micro-interactions throughout daily life. However, because they require both hands to operate, their full potential is constrained, particularly when the user is actively performing a task with their hands. We investigate the space of no-handed interaction with smartwatches in scenarios where one or both hands are not free. Specifically, we present a taxonomy of scenarios in which standard touchscreen interaction with smartwatches is not possible, and discuss the key constraints that limit such interaction. We then implement a set of interaction techniques and evaluate them via two user studies: one where participants viewed video clips of the techniques, and another where participants used the techniques in simulated hand-constrained scenarios. Our results reveal a preference for foot-based interaction and surface novel design considerations to be mindful of when designing for no-handed smartwatch interaction scenarios.
"No Need to Stop What You're Doing: Exploring No-Handed Smartwatch Interaction." Proceedings of Graphics Interface 2017, pp. 107–114. doi:10.20380/GI2017.14
Reza Adhitya Saputra, C. Kaplan, P. Asente, R. Mech
We present a technique for drawing ornamental designs consisting of placed instances of simple shapes. These shapes, which we call elements, are selected from a small library of templates. The elements are deformed to flow along a direction field interpolated from user-supplied strokes, giving a sense of visual flow to the final composition, and constrained to lie within a container region. Our implementation computes a vector field based on user strokes, constructs streamlines that conform to the vector field, and places an element over each streamline. An iterative refinement process then shifts and stretches the elements to improve the composition.
"FLOWPAK: Flow-based Ornamental Element Packing." Proceedings of Graphics Interface 2017, pp. 8–15. doi:10.20380/GI2017.02
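The pipeline above (a direction field interpolated from user strokes, streamlines traced through it, elements placed along each streamline) can be illustrated with a minimal sketch. This is not the authors' implementation: the inverse-distance interpolation of stroke directions and the forward-Euler streamline tracer below are generic stand-ins for whichever field construction and integration scheme the paper actually uses.

```python
import numpy as np

def interpolate_direction(p, stroke_pts, stroke_dirs, eps=1e-6):
    """Inverse-distance-weighted blend of stroke directions at point p."""
    d = np.linalg.norm(stroke_pts - p, axis=1) + eps
    w = 1.0 / d**2
    v = (w[:, None] * stroke_dirs).sum(axis=0)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def trace_streamline(seed, stroke_pts, stroke_dirs, step=0.05, n_steps=100):
    """Trace a streamline through the interpolated field by forward Euler.
    An element would then be fitted and deformed along the returned polyline."""
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        v = interpolate_direction(pts[-1], stroke_pts, stroke_dirs)
        pts.append(pts[-1] + step * v)
    return np.array(pts)
```

A single horizontal stroke, for instance, yields streamlines that flow left to right; the paper's iterative refinement (shifting and stretching elements) would operate on top of such traces.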
Senthil K. Chandrasegaran, Sriram Karthik Badam, Ninger Zhou, Zhenpeng Zhao, Lorraine G. Kisselburgh, K. Peppler, N. Elmqvist, K. Ramani
Despite its grounding in creativity techniques, merging multiple source sketches to create new ideas has received scant attention in the design literature. In this paper, we identify the physical operations involved in merging sketch components. We also introduce the cognitive operations of reuse, repurpose, refactor, and reinterpret, and explore their relevance to creative design. To examine the relationship between cognitive operations, physical techniques, and creative sketch outcomes, we conducted a qualitative user study in which student designers merged existing sketches to generate either an alternative design or an unrelated new design. We compared two digital selection techniques: freeform selection and a stroke-cluster-based “object select” technique. The resulting merged sketches were evaluated via crowdsourcing and manually coded for the use of cognitive operations. Our findings establish a firm connection between the proposed cognitive operations and the context and outcome of creative tasks. Key findings indicate that reinterpret operations correlate strongly with creativity in merged sketches, while reuse operations correlate negatively with creativity. Furthermore, designers significantly preferred the freeform selection technique. We discuss the empirical contributions of understanding the use of cognitive operations during design exploration, and the practical implications for designing interfaces in digital tools that facilitate creativity in merging sketches.
"Merging Sketches for Creative Design Exploration: An Evaluation of Physical and Cognitive Operations." Proceedings of Graphics Interface 2017, pp. 115–123. doi:10.20380/GI2017.15
S. Holderness, Jared N. Bott, P. Wisniewski, J. Laviola
In this paper we examine two methods for using relative contact size as an interaction technique for 3D environments on multi-touch capacitive touch screens. We refer to interpreting relative contact size changes as “pressure” simulation. We conducted a 2 × 2 within-subjects experimental design using two methods for pressure estimation (calibrated and comparative) and two different 3D tasks (bidirectional and unidirectional). Calibrated pressure estimation was based upon a calibration session, whereas comparative pressure estimation was based upon the contact size of each initial touch. The bidirectional task was guiding a ball through a hoop, while the unidirectional task involved using pressure to rotate a stove knob. Results indicate that the preferred and best performing pressure estimation technique was dependent on the 3D task. For the bidirectional task, calibrated pressure performed significantly better, while the comparative method performed better for the unidirectional task. We discuss the implications and future research directions based on our findings.
"Exploring Multi-touch Contact Size for Z-Axis Movement in 3D Environments." Proceedings of Graphics Interface 2017, pp. 65–73. doi:10.20380/GI2017.09
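The two estimation schemes can be sketched in a few lines. The mapping functions below are illustrative assumptions rather than the study's actual formulas: `calibrated_pressure` normalizes contact size against per-user bounds recorded in a calibration session, while `comparative_pressure` rates each touch against its own initial contact size; the linear mappings and the `gain` constant are hypothetical.

```python
def calibrated_pressure(contact_size, size_min, size_max):
    """Map contact size to [0, 1] using per-user calibration bounds
    (size_min/size_max would come from a calibration session)."""
    if size_max <= size_min:
        return 0.0
    t = (contact_size - size_min) / (size_max - size_min)
    return min(max(t, 0.0), 1.0)

def comparative_pressure(contact_size, initial_size, gain=2.0):
    """Estimate pressure from growth relative to the initial touch's
    contact size; gain is a hypothetical sensitivity constant."""
    if initial_size <= 0:
        return 0.0
    t = (contact_size / initial_size - 1.0) * gain
    return min(max(t, 0.0), 1.0)
```

The design trade-off the study probes is visible even here: the calibrated variant needs an up-front session but gives an absolute scale, while the comparative variant is calibration-free but only measures change within a single touch.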
Large vertical displays are considered well adapted for collaboration, due to their display surface and the space in front of them that can accommodate multiple people. However, few studies empirically support this assertion, and they do not quantitatively assess how collaboration in front of a shared display differs from a non-shared setup, such as multiple desktops with a common view. In this paper, we compare a large shared vertical display with two desktops, when pairs of users learn to perform a path-planning task. Our results did not indicate a significant difference in learning between the two setups, but found that participants adopted different task strategies. Moreover, while pairs were overall faster with the two desktops, quality was more consistent on the vertical shared display, where pairs spent more time communicating, even though a priori there is more implicit collaboration in this setup.
Arnaud Prouzeau, A. Bezerianos, O. Chapuis. "Trade-offs Between a Vertical Shared Display and Two Desktops in a Collaborative Path-Finding Task." Proceedings of Graphics Interface 2017, pp. 214–219. doi:10.20380/GI2017.27
This paper describes a novel model for coupling continuous chemical diffusion and discrete cellular events inside a biologically inspired simulation environment. Our goal is to define and explore a minimalist yet expressive set of features, enabling the creation of complex and plausible 2D patterns using just a few rules. Because the model is not constrained to a static or regular grid, we show that many different phenomena can be simulated, such as traditional reaction-diffusion systems, cellular automata, and pigmentation patterns from living beings. In particular, we demonstrate that adding chemical saturation significantly increases the range of patterns that reaction-diffusion can simulate, including patterns not previously possible, such as leopard rosettes. Our results suggest a possible universal model that can integrate previous pattern-formation approaches, providing new ground for experimentation and realistic-looking textures for general use in computer graphics.
M. Malheiros, M. Walter. "Pattern formation through minimalist biologically inspired cellular simulation." Proceedings of Graphics Interface 2017, pp. 148–155. doi:10.20380/GI2017.19
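To make the saturation idea concrete, here is a minimal sketch using the classic Gray-Scott reaction-diffusion model on a regular toroidal grid — deliberately not the paper's grid-free cellular model — with a simple clamp `v_max` standing in for chemical saturation. The parameter values are standard Gray-Scott demo settings, assumed for illustration only.

```python
import numpy as np

def laplacian(A):
    # 5-point stencil with wrap-around (toroidal grid)
    return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
            np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, f=0.035, k=0.065,
                    dt=1.0, v_max=1.0):
    """One explicit Euler step of Gray-Scott reaction-diffusion,
    with a saturation cap v_max on the second chemical."""
    UVV = U * V * V
    U = U + dt * (Du * laplacian(U) - UVV + f * (1 - U))
    V = V + dt * (Dv * laplacian(V) + UVV - (f + k) * V)
    return U, np.clip(V, 0.0, v_max)
```

Lowering `v_max` caps how much of the second chemical can accumulate, which is the crude analogue of the saturation mechanism the paper credits with enabling patterns such as leopard rosettes.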
We describe a methodology for the interactive definition of curves and motion paths using a stochastic formulation of optimal control. We demonstrate how the same optimization framework can be used in different ways to generate curves and traces that are geometrically and dynamically similar to the ones seen in art forms such as calligraphy or graffiti. The method provides a probabilistic description of trajectories that can be edited similarly to the control polygon typically used in popular spline-based methods. Furthermore, it also encapsulates movement kinematics, deformations, and variability. The user is then provided with a simple interactive interface that can generate multiple movements and traces at once, by visually defining a distribution of trajectories rather than a single one. The input to our method is a sparse sequence of targets defined as multivariate Gaussians. The output is a dynamical system generating natural-looking curves that reflect the kinematics of a movement, similar to those produced by human drawing or writing.
Daniel Berio, S. Calinon, F. Leymarie. "Generating Calligraphic Trajectories with Model Predictive Control." Proceedings of Graphics Interface 2017, pp. 132–139. doi:10.20380/GI2017.17
We present a novel pipeline to generate a depth map from a single image that can be used as input for a variety of artistic depth-based effects. In such a context, the depth maps do not have to be perfect but are rather designed with respect to a desired result. Consequently, our solution centers on user interaction and relies on scribble-based depth editing. The annotations can be sparse, as the depth map is generated by a diffusion process guided by image features. Additionally, we support a variety of controls, such as a non-linear depth mapping and a steering mechanism for the diffusion (e.g., directionality, emphasis, or reduction of the influence of image cues), and in addition to absolute depth indications, we also support relative ones. We demonstrate a variety of artistic 3D results, including wiggle stereoscopy and depth of field.
J. Liao, Shuheng Shen, E. Eisemann. "Depth Map Design and Depth-based Effects With a Single Image." Proceedings of Graphics Interface 2017, pp. 57–64. doi:10.20380/GI2017.08
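The core idea of diffusing sparse scribbles into a dense depth map can be sketched with a plain iterative averaging scheme. This deliberately omits the paper's image-feature guidance and editing controls: scribbled pixels are simply held fixed while every other pixel relaxes toward the average of its four neighbors (with wrap-around at the borders, for brevity).

```python
import numpy as np

def diffuse_depth(scribbles, mask, n_iters=500):
    """Fill a dense depth map from sparse scribbles by iterative
    neighbor averaging; pixels where mask is True are held fixed.
    Unannotated pixels start at 0 and converge to a smooth
    interpolation of the scribbled values."""
    D = scribbles.astype(float).copy()
    for _ in range(n_iters):
        avg = (np.roll(D, 1, 0) + np.roll(D, -1, 0) +
               np.roll(D, 1, 1) + np.roll(D, -1, 1)) / 4.0
        D = np.where(mask, scribbles, avg)
    return D
```

An edge-aware version in the spirit of the paper would weight each neighbor by image similarity so that depth discontinuities align with image edges; the uniform weights here are the simplest possible stand-in.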
Image projection is important for many applications in the entertainment industry, augmented reality, and computer graphics. However, projection often introduces perceived distortion, a common problem with projector systems, and compensating for such distortion when projecting on non-trivial surfaces is very challenging. In this paper, we propose a novel method to pre-warp the image so that it appears as distortion-free as possible on the surface after projection. Our method estimates the desired optimal warping function via an optimization framework. Specifically, we design an objective energy function that models the perceived distortion in projection results. By taking into account both the geometry of the surface and the image content, our method produces more visually plausible projection results than traditional projector systems. We demonstrate the effectiveness of our method with projection results on a wide variety of images and surface geometries.
Long Mai, Hoang Le, Feng Liu. "Content and Surface Aware Projection." Proceedings of Graphics Interface 2017, pp. 24–32. doi:10.20380/GI2017.04