
Latest publications in Proceedings. Graphics Interface (Conference)

Session details: Session G2: Geometry & Style
Pub Date: 2018-06-01 | DOI: 10.5555/3374362.3374425
Alec Jacobson
{"title":"Session details: Session G2: Geometry & Style","authors":"Alec Jacobson","doi":"10.5555/3374362.3374425","DOIUrl":"https://doi.org/10.5555/3374362.3374425","url":null,"abstract":"","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47954305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Performance Characteristics of a Camera-Based Tangible Input Device for Manipulation of 3D Information
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.10
Zeyuan Chen, C. Healey, R. Amant
This paper describes a prototype tangible six-degree-of-freedom (6 DoF) input device that is inexpensive and intuitive to use: a cube with colored corners of specific shapes, tracked by a single camera, with its pose estimated in real time. A tracking and automatic color adjustment system is designed so that the device works robustly in noisy surroundings and is invariant to changes in lighting and background. A system evaluation shows good performance for both refresh rate (above 60 FPS on average) and pose estimation accuracy (average angular error of about 1°). A user study of 3D rotation tasks shows that the device outperforms other 6 DoF input devices used in similar desktop environments. The device has the potential to facilitate interactive applications such as games, as well as viewing 3D information.
Pages: 74-81
Citations: 3
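The real-time core of the device above is classic camera pose estimation from known 3D-2D correspondences. Below is a minimal sketch of that step, assuming OpenCV, known camera intrinsics, and corners already detected and ordered by the color/shape tracker; the cube side length and all names are illustrative, not the authors' implementation.

```python
import numpy as np
import cv2

# 3D corner coordinates of a cube in its own frame.
# The 40 mm side length is an illustrative assumption.
S = 40.0
OBJECT_POINTS = np.array([
    [0, 0, 0], [S, 0, 0], [S, S, 0], [0, S, 0],
    [0, 0, S], [S, 0, S], [S, S, S], [0, S, S],
], dtype=np.float64)

def estimate_cube_pose(image_points, camera_matrix, dist_coeffs):
    """image_points: (8, 2) detected corner pixels, ordered to match OBJECT_POINTS."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec              # the cube's 6 DoF pose in camera space
```

cv2.solvePnP accepts any four or more correspondences, so a tracker can still recover the pose under partial occlusion by passing only the visible corners together with their matching subset of object points.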
Collaborative 3D Modeling by the Crowd
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.16
Ryohei Suzuki, T. Igarashi
We propose a collaborative 3D modeling system that deconstructs the complex 3D modeling process into a collection of simple tasks to be executed by nonprofessional crowd workers. Given a 2D image showing a target object, each crowd worker is directed to draw a simple sketch representing an orthographic view of the object, using their visual cognition and real-world knowledge. The system then synthesizes a 3D model by integrating the geometric information obtained from the gathered sketches. We present a set of algorithms that generate clean line drawings and a 3D model from a collection of incomplete sketches containing considerable errors and inconsistencies. We also discuss a crowdsourcing workflow that iteratively improves the quality of submitted sketches: it introduces competition between workers through extra rewards based on peer review, as well as an example-sharing mechanism that helps workers understand the task requirements and quality standards. The proposed system can produce decent-quality 3D geometry for various objects within a few hours.
Pages: 124-131
Citations: 1
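The synthesis step above integrates geometric information from multiple orthographic views. A hedged sketch of the simplest version of that idea follows: silhouette-based voxel carving from three assumed axis-aligned views. The paper's actual algorithm also cleans up errors and inconsistencies in the sketches, which this does not attempt.

```python
import numpy as np

def carve_voxels(front, side, top):
    """front/side/top: (N, N) boolean silhouettes from three orthographic views.
    Axis conventions here are assumptions: front indexes (y, x), side (y, z),
    top (x, z); the voxel grid is indexed (y, x, z)."""
    n = front.shape[0]
    vox = np.ones((n, n, n), dtype=bool)
    vox &= front[:, :, None]   # front view constrains (y, x), free along z
    vox &= side[:, None, :]    # side view constrains (y, z), free along x
    vox &= top[None, :, :]     # top view constrains (x, z), free along y
    return vox                 # True where all three silhouettes agree
```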
Euclidean Distance Transform Shadow Mapping
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.22
Márcio C. F. Macedo, A. Apolinario
High-quality simulation of the penumbra effect in real-time shadows is a challenging problem in shadow mapping. Existing shadow map filtering techniques are prone to aliasing and light-leaking artifacts, which degrade shadow visual quality. In this paper, we aim to minimize both problems with Euclidean distance transform shadow mapping. To reduce the perspective aliasing artifacts generated by shadow mapping, we revectorize the hard shadow boundaries using revectorization-based shadow mapping. Then, an exact normalized Euclidean distance transform is computed in the user-defined penumbra region to simulate the penumbra effect. Finally, a mean filter is applied to further suppress the skeleton artifacts generated by the distance transform. The results show that our technique runs entirely on the GPU, produces fewer artifacts than related work, and provides real-time performance.
Pages: 171-180
Citations: 3
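A CPU sketch of the central idea, assuming SciPy: a normalized Euclidean distance transform of the hard-shadow mask yields a penumbra falloff, followed by a mean filter to suppress skeleton artifacts. The paper computes this on the GPU in screen space; the penumbra width and the normalization used below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, uniform_filter

def edt_penumbra(hard_shadow, penumbra_px=8.0):
    """hard_shadow: (H, W) boolean mask from shadow mapping (True = shadowed).
    Returns a [0, 1] shadow factor with a soft penumbra of ~penumbra_px pixels."""
    d_in = distance_transform_edt(hard_shadow)    # distance to light, inside shadow
    d_out = distance_transform_edt(~hard_shadow)  # distance to shadow, outside it
    # Signed distance to the hard shadow boundary, remapped so that
    # 0 = fully lit, 1 = fully shadowed, in between = penumbra.
    signed = np.where(hard_shadow, d_in, -d_out)
    shadow = np.clip(0.5 + signed / (2.0 * penumbra_px), 0.0, 1.0)
    return uniform_filter(shadow, size=3)         # mean filter vs. EDT skeleton artifacts
```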
Raising the Bars: Evaluating Treemaps vs. Wrapped Bars for Dense Visualization of Sorted Numeric Data
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.06
M. A. Yalçın, N. Elmqvist, B. Bederson
A standard (single-column) bar chart can effectively visualize a sorted list of numeric records. However, the chart height limits the number of visible records. To show more records, the bars can be made thinner (which can hinder identifying individual records), or the chart can be scrolled (which requires interaction to regain the overview). Treemaps have been used in practice in non-hierarchical settings for dense visualization of numeric data. As an alternative, we consider wrapped bars, a multi-column bar chart that uses length instead of area to encode numeric values. We compare treemaps and wrapped bars based on their design characteristics, and on graphical perception performance for comparison, ranking, and overview tasks using crowdsourced experiments. Our analysis found that wrapped bars perceptually outperform treemaps in all three tasks for dense visualization of non-hierarchical, sorted numeric data.
Pages: 41-49
Citations: 8
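Wrapped bars are straightforward to construct: sort the records, split them across columns, and draw each column as a horizontal bar chart so that length, not area, encodes the value. A minimal sketch assuming Matplotlib; layout details such as figure size and label font are arbitrary choices here, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def wrapped_bars(values, labels, n_cols=4):
    """Draw a sorted list of records as n_cols columns of horizontal bars."""
    order = np.argsort(values)[::-1]                      # sort descending
    rows = int(np.ceil(len(order) / n_cols))
    fig, axes = plt.subplots(1, n_cols, sharex=True, figsize=(3 * n_cols, 6))
    for c, ax in enumerate(np.atleast_1d(axes)):
        idx = order[c * rows:(c + 1) * rows]              # this column's slice
        ax.barh(range(len(idx)), [values[i] for i in idx])
        ax.set_yticks(range(len(idx)))
        ax.set_yticklabels([labels[i] for i in idx], fontsize=7)
        ax.invert_yaxis()                                 # largest value at the top
    fig.tight_layout()
    return fig

# Example: wrapped_bars(np.random.rand(80), [f"item {i}" for i in range(80)])
```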
Supporting Team-First Visual Analytics through Group Activity Representations
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.26
Sriram Karthik Badam, Zehua Zeng, Emily Wall, A. Endert, N. Elmqvist
Collaborative visual analytics (CVA) involves sensemaking activities within teams of analysts based on coordination of work across team members, awareness of team activity, and communication of hypotheses, observations, and insights. We introduce a new type of CVA tool based on the notion of "team-first" visual analytics, where supporting the analytical process and needs of the entire team is the primary focus of the graphical user interface, before that of the individual analysts. To this end, we present a design space and guidelines for team-first tools in terms of conveying analyst presence, focus, and activity within the interface. We then introduce InsightsDrive, a CVA tool for multidimensional data that incorporates team-first features into the interface through group activity visualizations. These include (1) in-situ representations that show the focus regions of all users, integrated into the data visualizations themselves using color-coded selection shadows, and (2) ex-situ representations that show each analyst's data coverage using multidimensional visual representations. We conducted two user studies: one with individual analysts to identify the affordances of different visual representations for conveying data coverage, and the other to evaluate the performance of our team-first design with ex-situ and in-situ awareness for visual analytic tasks. Our results characterize the performance of our team-first features and reveal their advantages for team coordination.
Pages: 208-213
Citations: 8
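One concrete piece of the design above is the in-situ "color-coded selection shadow". A rough sketch of how such an overlay could be composited, assuming per-user boolean selection masks over a shared view; the blending rule and all names are illustrative assumptions, not taken from InsightsDrive.

```python
import numpy as np

def selection_shadow(selections, user_colors, alpha=0.35):
    """selections: (U, H, W) boolean masks, one per analyst.
    user_colors: (U, 3) RGB in [0, 1]. Returns an (H, W, 3) overlay in which
    overlapping selections mix the analysts' colors."""
    h, w = selections.shape[1:]
    overlay = np.ones((h, w, 3))                  # white background
    for mask, color in zip(selections, user_colors):
        m = mask[..., None]
        # Alpha-blend this analyst's color into their selected region.
        overlay = np.where(m, (1 - alpha) * overlay + alpha * color, overlay)
    return overlay
```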
Ivy: Exploring Spatially Situated Visual Programming for Authoring and Understanding Intelligent Environments
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.20
Barrett Ens, Fraser Anderson, Tovi Grossman, M. Annett, Pourang Irani, G. Fitzmaurice
The availability of embedded digital systems has led to a multitude of interconnected sensors and actuators being distributed among smart objects and built environments. Programming and understanding the behaviors of such systems can be challenging given their inherent spatial nature. To explore how spatial and contextual information can facilitate the authoring of intelligent environments, we introduce Ivy, a spatially situated visual programming tool using immersive virtual reality. Ivy allows users to link smart objects, insert logic constructs, and visualize real-time data flows between real-world sensors and actuators. Initial feedback sessions show that participants of varying skill levels can successfully author and debug programs in example scenarios.
Pages: 156-162
Citations: 48
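Under the immersive interface, Ivy's programs are dataflow graphs linking sensors, logic constructs, and actuators. A hedged, non-VR sketch of that wiring pattern follows; all class and function names are illustrative assumptions, not Ivy's API.

```python
from typing import Callable, List

class Node:
    """A source in a toy dataflow graph: emits values to connected listeners."""
    def __init__(self) -> None:
        self.listeners: List[Callable[[float], None]] = []

    def connect(self, fn: Callable[[float], None]) -> None:
        self.listeners.append(fn)

    def emit(self, value: float) -> None:
        for fn in self.listeners:
            fn(value)

def lamp_on(state: bool) -> None:
    print("lamp:", "on" if state else "off")   # stand-in for a real actuator

def threshold(value: float, limit: float = 0.5) -> None:
    lamp_on(value > limit)                      # a simple logic construct

# Sensor -> threshold logic -> actuator, wired like Ivy's visual links.
motion_sensor = Node()
motion_sensor.connect(threshold)
motion_sensor.emit(0.8)   # -> lamp: on
```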
Real-time Rendering with Compressed Animated Light Fields
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.05
Babis Koniaris, Maggie Kosek, David Sinclair, Kenny Mitchell
We propose an end-to-end solution for presenting movie-quality animated graphics to the user while still allowing the sense of presence afforded by free-viewpoint head motion. By transforming offline-rendered movie content into a novel immersive representation, we display the content in real time according to the tracked head pose. For each frame, we generate a set of cubemap images (colors and depths) using a sparse set of cameras placed in the vicinity of the potential viewer locations. The camera placement is optimized so that the rendered data maximizes coverage with minimum redundancy, depending on the complexity of the lighting environment. We compress the colors and depths separately, introducing an integrated spatial and temporal scheme tailored to high performance on GPUs for virtual reality applications. We detail a real-time rendering algorithm using multi-view ray casting and view-dependent decompression. Compression rates of 150:1 and greater are demonstrated with quantitative analysis of image reconstruction quality and performance.
Pages: 33-40
Citations: 11
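One small piece of such a pipeline is choosing which captured cubemaps to reconstruct from as the tracked head moves. A sketch of nearest-camera selection with inverse-distance blend weights; the weighting scheme is an assumption for illustration, not the paper's view-dependent decompression, and camera placement and compression are out of scope here.

```python
import numpy as np

def blend_weights(head_pos, cam_positions, k=4, eps=1e-6):
    """head_pos: (3,) tracked head position; cam_positions: (N, 3) capture cameras.
    Returns the indices of the k nearest cameras and normalized blend weights."""
    d = np.linalg.norm(cam_positions - head_pos, axis=1)
    nearest = np.argsort(d)[:k]           # k closest capture viewpoints
    w = 1.0 / (d[nearest] + eps)          # inverse-distance weighting (an assumption)
    return nearest, w / w.sum()
```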
De-Identified Feature-based Visualization of Facial Expression for Enhanced Text Chat
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.25
Shuo-Ping Wang, Mei-Ling Chen, Hao-Chuan Wang, Chien-Tung Lai, A. Huang
The lack of visibility in text-based chat can hinder communication, especially when nonverbal cues are instrumental to the production and understanding of messages. However, communicating rich nonverbal cues such as facial expressions may be technologically more costly (e.g., demanding bandwidth for video streaming) and socially less desirable (e.g., disclosing other personal and contextual information through video). We consider how to balance this tension by supporting people in conveying facial expressions without compromising the benefits of invisibility in text communication. We present KinChat, an enhanced text chat tool that integrates motion sensing and 2D graphical visualization as a technique to convey information about key facial features during text conversations. We conducted two studies to examine how KinChat influences the de-identification and awareness of facial cues in comparison to techniques using raw and blur-processed videos, as well as its impact on real-time text chat. We show that feature-based visualization of facial expression can preserve both awareness of facial cues and non-identifiability at the same time, leading to better understanding and reduced anxiety.
Pages: 199-207
Citations: 1
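A rough sketch of the feature-based, de-identified rendering idea: draw a schematic face from a few normalized expression features instead of streaming video. Feature extraction from the motion sensor is assumed to happen elsewhere, and the drawing scheme below is an illustrative guess, not KinChat's actual design.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def draw_face(mouth_open=0.2, brow_raise=0.1, smile=0.5):
    """All features normalized to [0, 1]; no identifying detail is rendered."""
    fig, ax = plt.subplots(figsize=(3, 3))
    ax.add_patch(Circle((0.5, 0.5), 0.4, fill=False))                 # head outline
    for x in (0.35, 0.65):
        ax.add_patch(Circle((x, 0.6), 0.03))                          # eyes
        ax.plot([x - 0.06, x + 0.06],
                [0.68 + 0.05 * brow_raise] * 2, "k-")                 # brows
    ax.add_patch(Ellipse((0.5, 0.32 + 0.05 * smile),
                         0.2, 0.02 + 0.15 * mouth_open, fill=False))  # mouth
    ax.set_xlim(0, 1); ax.set_ylim(0, 1)
    ax.set_aspect("equal"); ax.axis("off")
    return fig
```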
Animating Multiple Escape Maneuvers for a School of Fish
Pub Date: 2017-06-01 | DOI: 10.20380/GI2017.18
Sahithi Podila, Ying Zhu
A school of fish exhibits a variety of distinctive maneuvers to escape from predators. For example, fish adopt avoid, compact, and inspection maneuvers when predators are nearby, use skitter or fast-avoid maneuvers when predators chase them, and exhibit fountain, split, and flash maneuvers when predators attack them. Although these escape maneuvers have long been studied in biology and ecology, they have not been sufficiently modeled in computer graphics. Previous work on fish animation provided only simple escape behavior, lacking variety, and the classic boids model does not include escape behavior at all. In this paper, we propose a behavioral model that simulates a variety of fish escape behaviors in reaction to a single predator. Based on biological studies, our model can simulate common escape maneuvers such as compact, inspection, avoid, fountain, and flash. We demonstrate our results with simulations of predator attacks.
Pages: 140-147
Citations: 2
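A minimal sketch of extending a classic boids-style update with one escape maneuver ("avoid": steer away from a nearby predator), assuming NumPy. The paper models several distinct maneuvers with biologically grounded triggers; this shows only the general pattern, with all gains and radii chosen arbitrarily.

```python
import numpy as np

def step(pos, vel, predator, dt=0.1, avoid_radius=5.0, max_speed=2.0):
    """pos, vel: (N, 3) fish positions/velocities; predator: (3,) position."""
    cohesion = (pos.mean(axis=0) - pos) * 0.01        # steer toward school center
    alignment = (vel.mean(axis=0) - vel) * 0.05       # match neighbors' heading
    away = pos - predator
    dist = np.linalg.norm(away, axis=1, keepdims=True)
    threat = dist < avoid_radius                      # only nearby fish react
    escape = np.where(threat, away / (dist ** 2 + 1e-6), 0.0)
    vel = vel + (cohesion + alignment + escape) * dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel / speed * max_speed, vel)
    return pos + vel * dt, vel
```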