
Proceedings of the ACM on Computer Graphics and Interactive Techniques: Latest Publications

Interactive simulation of plume and pyroclastic volcanic ejections
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522609
Maud Lastic, D. Rohmer, G. Cordonnier, C. Jaupart, Fabrice Neyret, Marie-Paule Cani
We propose an interactive animation method for the ejection of gas-and-ash mixtures in volcanic eruptions. Our novel, layered solution combines a coarse-grain, physically-based simulation of the ejection dynamics with a consistent, procedural animation of multi-resolution details. We show that this layered model can capture the two main types of ejection, namely ascending plume columns composed of rapidly rising, ash-carrying gas that progressively entrains more air, and pyroclastic flows that descend the slopes of the volcano depositing ash, ultimately giving rise to smaller plumes along the way. We validate the large-scale consistency of our model through comparison with geoscience data, and discuss both real-time visualization and off-line, realistic rendering.
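The layered idea, a cheap coarse-grain physics update driving procedurally added detail, can be illustrated with a minimal sketch. The entrainment law, the constants, and the initial state below are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Minimal sketch of a layered plume model (not the authors' code):
# a coarse, physically based update of a rising column, decorated with
# cheap procedural detail. All constants are illustrative assumptions.

DT = 0.05          # time step [s]
ENTRAINMENT = 0.1  # fraction of ambient air entrained per step (assumed)
GRAVITY = 9.81

def step_plume(height, velocity, density, ambient_density=1.2):
    """Advance one coarse-grain step of a buoyant plume column."""
    # Entrainment mixes ambient air into the column, diluting it.
    density = (1 - ENTRAINMENT) * density + ENTRAINMENT * ambient_density
    # Buoyant acceleration from the density deficit.
    accel = GRAVITY * (ambient_density - density) / ambient_density
    velocity = max(velocity + accel * DT, 0.0)
    height += velocity * DT
    return height, velocity, density

def procedural_detail(height, t, octaves=3):
    """Cheap multi-resolution wobble layered on top of the coarse state."""
    return sum(np.sin(2**o * (0.7 * height + 1.3 * t)) / 2**o
               for o in range(octaves))

h, v, rho = 0.0, 10.0, 0.6  # hot, light ejecta (assumed initial state)
for i in range(100):
    h, v, rho = step_plume(h, v, rho)
print(f"column height ~ {h:.1f} m, detail offset {procedural_detail(h, 5.0):+.3f}")
```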
{"title":"Interactive simulation of plume and pyroclastic volcanic ejections","authors":"Maud Lastic, D. Rohmer, G. Cordonnier, C. Jaupart, Fabrice Neyret, Marie-Paule Cani","doi":"10.1145/3522609","DOIUrl":"https://doi.org/10.1145/3522609","url":null,"abstract":"We propose an interactive animation method for the ejection of gas and ashes mixtures in volcano eruption. Our novel, layered solution combines a coarse-grain, physically-based simulation of the ejection dynamics with a consistent, procedural animation of multi-resolution details. We show that this layered model can be used to capture the two main types of ejection, namely ascending plume columns composed of rapidly rising gas carrying ash which progressively entrains more air, and pyroclastic flows which descend the slopes of the volcano depositing ash, ultimately leading to smaller plumes along their way. We validate the large-scale consistency of our model through comparison with geoscience data, and discuss both real-time visualization and off-line, realistic rendering.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":" ","pages":"1 - 15"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44740332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Real-Time Relighting of Human Faces with a Low-Cost Setup
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522626
Nejc Maček, B. Usta, E. Eisemann, R. Marroquim
Video-streaming services usually feature post-processing effects to replace the background. However, these often yield inconsistent lighting. Machine-learning-based relighting methods can address this problem, but, at real-time rates, are restricted to a low resolution and can result in an unrealistic skin appearance. Physically-based rendering techniques require complex skin models that can only be acquired using specialised equipment. Our method is lightweight and uses only a standard smartphone. By correcting imperfections during capture, we extract a convincing physically-based skin model. In combination with suitable acceleration techniques, we achieve real-time rates on commodity hardware.
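How an extracted skin model composites against a replaced background can be sketched with the simplest physically based term, a Lambertian diffuse lobe. The paper's skin model is richer; the albedo, normals, and light below are synthetic stand-ins.

```python
import numpy as np

# Minimal relighting sketch under an assumed Lambertian skin model.
# Only the compositing step is shown: shade captured albedo/normals
# under the new background's dominant light. All inputs are synthetic.

H, W = 4, 4
albedo = np.full((H, W, 3), 0.7)                    # skin reflectance (assumed)
normals = np.zeros((H, W, 3)); normals[..., 2] = 1  # facing the camera

def relight(albedo, normals, light_dir, light_rgb):
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, None)         # per-pixel cosine term
    return albedo * ndotl[..., None] * light_rgb    # diffuse shading

img = relight(albedo, normals, light_dir=(0.3, 0.4, 1.0),
              light_rgb=(1.0, 0.9, 0.8))
print(img[0, 0])  # one relit pixel
```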
{"title":"Real-Time Relighting of Human Faces with a Low-Cost Setup","authors":"Nejc Maček, B. Usta, E. Eisemann, R. Marroquim","doi":"10.1145/3522626","DOIUrl":"https://doi.org/10.1145/3522626","url":null,"abstract":"Video-streaming services usually feature post-processing effects to replace the background. However, these often yield inconsistent lighting. Machine-learning-based relighting methods can address this problem, but, at real-time rates, are restricted to a low resolution and can result in an unrealistic skin appearance. Physically-based rendering techniques require complex skin models that can only be acquired using specialised equipment. Our method is lightweight and uses only a standard smartphone. By correcting imperfections during capture, we extract a convincing physically-based skin model. In combination with suitable acceleration techniques, we achieve real-time rates on commodity hardware.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"5 1","pages":"1 - 19"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46506994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Permutation Coding for Vertex-Blend Attribute Compression
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522607
Christoph Peters, Bastian Kuth, Quirin Meyer
Compression of vertex attributes is crucial to keep bandwidth requirements in real-time rendering low. We present a method that encodes any given number of blend attributes for skinning at a fixed bit rate while keeping the worst-case error small. Our method exploits the fact that the blend weights are sorted: since their sorted order is known, no information is lost when the weights are shuffled. Our permutation coding thus encodes additional data, e.g. about bone indices, into the order of the weights. We also transform the weights linearly to ensure full coverage of the representable domain. Through a thorough error analysis, we arrive at a nearly optimal quantization scheme. Our method is fast enough to decode blend attributes in a vertex shader and also to encode them at runtime, e.g. in a compute shader. Our open source implementation supports up to 13 weights in up to 64 bits.
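A minimal sketch of the core trick, in Python rather than shader code: because the decoder can always re-sort the weights, the order in which they are written is redundant and can carry a payload such as bone-index bits. The Lehmer-code mapping below is a standard way to turn an integer into a permutation; it assumes all quantized weights are distinct so the order is recoverable, and it is not the paper's optimized scheme.

```python
import math

# With n weights, the write order can carry log2(n!) extra payload bits,
# e.g. part of the bone indices that would otherwise need their own storage.

def encode(sorted_weights, payload):
    """Write the sorted (descending) weights in the order chosen by `payload`."""
    n = len(sorted_weights)
    assert payload < math.factorial(n), "payload exceeds log2(n!) bits"
    items, perm = list(range(n)), []
    for radix in range(n, 0, -1):             # Lehmer code: int -> permutation
        payload, k = divmod(payload, radix)
        perm.append(items.pop(k))
    return [sorted_weights[p] for p in perm]

def decode(shuffled):
    """Recover the sorted weights and the payload hidden in their order."""
    n = len(shuffled)
    desc = sorted(shuffled, reverse=True)
    slot = {w: i for i, w in enumerate(desc)}  # weight -> sorted position
    perm = [slot[w] for w in shuffled]
    items, digits = list(range(n)), []
    for e in perm:                             # permutation -> Lehmer digits
        k = items.index(e)
        digits.append(k)
        items.pop(k)
    payload = 0
    for j in range(n - 1, -1, -1):             # Lehmer digits -> integer
        payload = payload * (n - j) + digits[j]
    return desc, payload

weights = [0.5, 0.3, 0.15, 0.05]               # sorted blend weights (toy values)
stored = encode(weights, payload=13)           # 4! = 24 orders: ~4.6 extra bits
print(stored, decode(stored))                  # recovers the weights and 13
```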
{"title":"Permutation Coding for Vertex-Blend Attribute Compression","authors":"Christoph Peters, Bastian Kuth, Quirin Meyer","doi":"10.1145/3522607","DOIUrl":"https://doi.org/10.1145/3522607","url":null,"abstract":"Compression of vertex attributes is crucial to keep bandwidth requirements in real-time rendering low. We present a method that encodes any given number of blend attributes for skinning at a fixed bit rate while keeping the worst-case error small. Our method exploits that the blend weights are sorted. With this knowledge, no information is lost when the weights get shuffled. Our permutation coding thus encodes additional data, e.g. about bone indices, into the order of the weights. We also transform the weights linearly to ensure full coverage of the representable domain. Through a thorough error analysis, we arrive at a nearly optimal quantization scheme. Our method is fast enough to decode blend attributes in a vertex shader and also to encode them at runtime, e.g. in a compute shader. Our open source implementation supports up to 13 weights in up to 64 bits.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"5 1","pages":"1 - 16"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41676116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stereo-consistent screen-space ambient occlusion
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522614
Pei-Bei Shi, M. Billeter, E. Eisemann
Screen-space ambient occlusion (SSAO) is highly efficient and widely used in real-time 3D applications. However, using SSAO algorithms in stereo rendering can lead to inconsistencies due to differences in the screen-space information captured by the left and right eyes. This affects the perception of the scene and may be a source of viewer discomfort. In this paper, we show that both the raw obscurance estimation and the subsequent filtering are sources of inconsistencies. We developed a screen-space method that considers both views jointly, leading to a stereo-aware raw obscurance estimator and a stereo-aware bilateral filter. The results show that our method reduces stereo inconsistencies to a level comparable to geometry-based AO solutions, while maintaining the performance benefits of a screen-space approach.
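A toy sketch of the consistency goal (not the paper's estimator or filter): once corresponding pixels in the two views are known, their obscurance estimates can be forced to agree. The 1D images and the constant disparity below are synthetic, and a real implementation would enforce this during estimation and filtering rather than as a post-pass.

```python
import numpy as np

# Toy stereo-consistency pass: average the raw obscurance of matching
# pixels so the left and right views show the same AO value.
# `disparity` maps a left-view pixel to its right-view match and is
# assumed given (e.g., known from the stereo camera setup).

W = 8
ao_left = np.linspace(0.2, 0.8, W)       # raw obscurance, left view (toy)
ao_right = np.linspace(0.25, 0.85, W)    # raw obscurance, right view (toy)
disparity = np.full(W, 2, dtype=int)     # left x matches right x - disparity

def stereo_consistent_ao(ao_l, ao_r, disparity):
    out_l, out_r = ao_l.copy(), ao_r.copy()
    for x in range(len(ao_l)):
        xr = x - disparity[x]            # matching pixel in the right view
        if 0 <= xr < len(ao_r):
            shared = 0.5 * (ao_l[x] + ao_r[xr])  # agree on one value
            out_l[x] = out_r[xr] = shared
    return out_l, out_r

l, r = stereo_consistent_ao(ao_left, ao_right, disparity)
print(np.round(l, 3)); print(np.round(r, 3))
```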
{"title":"Stereo-consistent screen-space ambient occlusion","authors":"Pei-Bei Shi, M. Billeter, E. Eisemann","doi":"10.1145/3522614","DOIUrl":"https://doi.org/10.1145/3522614","url":null,"abstract":"Screen-space ambient occlusion (SSAO) shows high efficiency and is widely used in real-time 3D applications. However, using SSAO algorithms in stereo rendering can lead to inconsistencies due to the differences in the screen-space information captured by the left and right eye. This will affect the perception of the scene and may be a source of viewer discomfort. In this paper, we show that the raw obscurance estimation part and subsequent filtering are both sources of inconsistencies. We developed a screen-space method involving both views in conjunction, leading to a stereo-aware raw obscurance estimation method and a stereo-aware bilateral filter. The results show that our method reduces stereo inconsistencies to a level comparable to geometry-based AO solutions, while maintaining the performance benefits of a screen-space approach.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":" ","pages":"1 - 12"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44285886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Interactive Physics-Based Virtual Sculpting with Haptic Feedback
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522611
Avirup Mandal, P. Chaudhuri, S. Chaudhuri
Sculpting is an art form that relies on both the visual and tactile senses. A faithful simulation of sculpting therefore requires interactive, physically accurate haptic and visual feedback. We present an interactive physics-based sculpting framework with faithful haptic feedback. We enable cutting of the material by designing a stable, remeshing-free cutting algorithm called the Improved stable eXtended Finite Element Method. We present a simulation framework that enables stable visual and haptic feedback at interactive rates. We evaluate the performance of our framework qualitatively and quantitatively through an extensive user study.
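The cutting solver itself is beyond a short sketch, but the stability constraint behind "stable haptic feedback at interactive rates" can be illustrated: haptic devices typically demand force updates at around 1 kHz, far above visual frame rates, so force feedback runs in its own loop against a simplified proxy of the simulated material. The stiffness, the flat surface, and the device stand-ins below are assumptions for illustration.

```python
import time

# Minimal haptic-rendering loop sketch (not the paper's XFEM solver):
# a ~1 kHz loop reads the tool position and sends a penalty force that
# pushes the tool out of the material.

STIFFNESS = 400.0   # N/m, assumed device-safe stiffness
RATE_HZ = 1000.0

def penalty_force(tool_y):
    """Spring force proportional to penetration depth below y = 0."""
    penetration = max(0.0, -tool_y)      # below y=0 means inside the material
    return STIFFNESS * penetration       # upward force magnitude [N]

def haptic_loop(read_tool_y, send_force, steps=5):
    period = 1.0 / RATE_HZ
    for _ in range(steps):
        send_force(penalty_force(read_tool_y()))
        time.sleep(period)               # a real device API blocks/schedules here

# Stand-ins for a real device driver:
haptic_loop(read_tool_y=lambda: -0.002,
            send_force=lambda f: print(f"{f:.2f} N"))
```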
{"title":"Interactive Physics-Based Virtual Sculpting with Haptic Feedback","authors":"Avirup Mandal, P. Chaudhuri, S. Chaudhuri","doi":"10.1145/3522611","DOIUrl":"https://doi.org/10.1145/3522611","url":null,"abstract":"Sculpting is an art form that relies on both the visual and tactile senses. A faithful simulation of sculpting, therefore, requires interactive, physically accurate haptic and visual feedback. We present an interactive physics-based sculpting framework with faithful haptic feedback. We enable cutting of the material by designing a stable, remeshing-free cutting algorithm called Improved stable eXtended Finite Element Method. We present a simulation framework to enable stable visual and haptic feedback at interactive rates. We evaluate the performance of our framework quantitatively and quantitatively through an extensive user study.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"5 1","pages":"1 - 20"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47519145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Real-Time Ray-Traced Soft Shadows of Environmental Lighting by Conical Ray Culling
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522617
Yang Xu, Yu Jiang, Junbo Zhang, Kang Li, Guohua Geng
Soft shadows of environmental lighting provide important visual cues in realistic rendering. However, rendering soft shadows of environmental lighting in real time is difficult because evaluating the visibility function is challenging. In this work, we present a method to render soft shadows of environmental lighting at real-time frame rates based on hardware-accelerated ray tracing. We assume that the scene contains both static and dynamic objects. To composite the soft shadows cast by dynamic objects with the precomputed lighting of static objects, the incident irradiance occluded by dynamic objects, obtained by accumulating the occluded incident radiances over the hemisphere using ray tracing, is subtracted from the precomputed incident irradiance. Conical ray culling is proposed to exclude rays that cannot intersect dynamic objects, which significantly improves rendering efficiency. Rendering results demonstrate that our proposed method achieves real-time rendering of soft shadows of environmental lighting cast by dynamic objects.
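The culling test itself is simple to sketch (our reading of the abstract, not the paper's code): bound the dynamic objects with a sphere as seen from the shading point, form the enclosing cone, and skip any hemisphere sample direction outside it, since such a ray cannot hit a dynamic occluder.

```python
import numpy as np

# Conical ray culling sketch: only directions inside the cone that
# bounds the dynamic objects need a shadow ray; the rest contribute
# no dynamic occlusion and are skipped.

def make_cone(shading_point, bbox_center, bbox_radius):
    """Cone from a point to a bounding sphere of the dynamic objects."""
    axis = bbox_center - shading_point
    dist = np.linalg.norm(axis)
    half_angle = np.arcsin(min(1.0, bbox_radius / dist))
    return axis / dist, np.cos(half_angle)

def needs_ray(direction, cone_axis, cos_half_angle):
    return np.dot(direction, cone_axis) >= cos_half_angle

p = np.array([0.0, 0.0, 0.0])
axis, cos_ha = make_cone(p, bbox_center=np.array([0.0, 2.0, 0.0]),
                         bbox_radius=0.5)

dirs = np.array([[0, 1, 0], [0.97, 0.25, 0], [0.25, 0.97, 0]], dtype=float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
for d in dirs:
    print(np.round(d, 3), "trace" if needs_ray(d, axis, cos_ha) else "culled")
```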
{"title":"Real-Time Ray-Traced Soft Shadows of Environmental Lighting by Conical Ray Culling","authors":"Yang Xu, Yu Jiang, Junbo Zhang, Kang Li, Guohua Geng","doi":"10.1145/3522617","DOIUrl":"https://doi.org/10.1145/3522617","url":null,"abstract":"Soft shadows of environmental lighting provide important visual cues in realistic rendering. However, rendering of soft shadows of environmental lighting in real-time is difficult because evaluating the visibility function is challenging. In this work, we present a method to render soft shadows of environmental lighting at real-time frame rates based on hardware-accelerated ray tracing. We assume that the scene contains both static and dynamic objects. To composite the soft shadows cast by dynamic objects with the precomputed lighting of static objects, the incident irradiance occluded by dynamic objects, which is obtained by accumulating the occluded incident radiances over the hemisphere using ray tracing, is subtracted from the precomputed incident irradiance. Conical ray culling is proposed to exclude the rays that cannot intersect dynamic objects, which significantly improves rendering efficiency. Rendering results demonstrate that our proposed method can achieve real-time rendering of soft shadows of environmental lighting cast by dynamic objects.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":" ","pages":"1 - 15"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47924112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Real-Time Hair Filtering with Convolutional Neural Networks
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-04 · DOI: 10.1145/3522606
Roc R. Currius, Ulf Assarsson, Erik Sintorn
Rendering realistic-looking hair is in general still too costly for real-time applications, from simulating the physics to rendering the fine details required for it to look natural, including self-shadowing. We show how an autoencoder network that can be evaluated in real time can be trained to filter an image rendered from a few stochastic samples, including self-shadowing, to produce a much more detailed image that takes real hair thickness and transparency into account.
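A toy encoder-decoder in the spirit of that filter, written with PyTorch for brevity. The real architecture, input channels, losses, and training data are not specified here, so the channel counts and layer sizes below are assumptions; training against high-sample-count references is omitted entirely.

```python
import torch
from torch import nn

# Toy filter network: take a noisy few-sample hair rendering plus an
# auxiliary channel, downsample to features, and decode a filtered image.

class HairFilter(nn.Module):
    def __init__(self, in_ch=4, feat=16):          # RGB + coverage (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

noisy = torch.rand(1, 4, 64, 64)   # 4-channel stochastic-sample input (toy)
filtered = HairFilter()(noisy)
print(filtered.shape)              # torch.Size([1, 3, 64, 64])
```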
{"title":"Real-Time Hair Filtering with Convolutional Neural Networks","authors":"Roc R. Currius, Ulf Assarsson, Erik Sintorn","doi":"10.1145/3522606","DOIUrl":"https://doi.org/10.1145/3522606","url":null,"abstract":"Rendering of realistic-looking hair is in general still too costly to do in real-time applications, from simulating the physics to rendering the fine details required for it to look natural, including self-shadowing. We show how an autoencoder network, that can be evaluated in real time, can be trained to filter an image of few stochastic samples, including self-shadowing, to produce a much more detailed image that takes into account real hair thickness and transparency.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":" ","pages":"1 - 15"},"PeriodicalIF":0.0,"publicationDate":"2022-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45316264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Rethinking Model-Based Gaze Estimation
IF 1.4 · Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-05-01 · Epub Date: 2022-05-17 · DOI: 10.1145/3530797
Harsimran Kaur, Swati Jindal, Roberto Manduchi

Over the past several years, a number of data-driven gaze tracking algorithms have been proposed that have been shown to outperform classic model-based methods in terms of gaze direction accuracy. These algorithms leverage the recent development of sophisticated CNN architectures, as well as the availability of large gaze datasets captured under various conditions. One shortcoming of black-box, end-to-end methods, though, is that any unexpected behaviors are difficult to explain. In addition, there is always the risk that a system trained on a certain dataset may not perform well when tested on data from a different source (the "domain gap" problem). In this work, we propose a novel method to embed eye geometry information in an end-to-end gaze estimation network by means of a "geometric layer". Our experimental results show that our system outperforms other state-of-the-art methods in cross-dataset evaluation, while producing competitive performance in within-dataset tests. In addition, the proposed system is able to extrapolate gaze angles outside the range of those considered in the training data.

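One plausible reading of such a geometric layer, sketched in PyTorch (an assumption for illustration; the paper's exact formulation is not reproduced here): let the network predict 3D eyeball and pupil centers and derive the gaze direction from them, so eye geometry is built in while gradients still flow end to end.

```python
import torch

# A differentiable "geometric layer": instead of regressing gaze angles
# directly, derive the gaze ray from predicted eye geometry.

def geometric_layer(eyeball_center, pupil_center):
    """Gaze ray: from the eyeball center through the pupil (unit vector)."""
    g = pupil_center - eyeball_center
    return g / g.norm(dim=-1, keepdim=True)

eyeball = torch.tensor([[0.0, 0.0, 0.0]], requires_grad=True)  # from a CNN head
pupil = torch.tensor([[0.1, -0.05, 1.0]])                      # from a CNN head
gaze = geometric_layer(eyeball, pupil)
gaze.sum().backward()            # gradients flow back to the predicted geometry
print(gaze, eyeball.grad.shape)
```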
{"title":"Rethinking Model-Based Gaze Estimation.","authors":"Harsimran Kaur, Swati Jindal, Roberto Manduchi","doi":"10.1145/3530797","DOIUrl":"10.1145/3530797","url":null,"abstract":"<p><p>Over the past several years, a number of data-driven gaze tracking algorithms have been proposed, which have been shown to outperform classic model-based methods in terms of gaze direction accuracy. These algorithms leverage the recent development of sophisticated CNN architectures, as well as the availability of large gaze datasets captured under various conditions. One shortcoming of black-box, end-to-end methods, though, is that any unexpected behaviors are difficult to explain. In addition, there is always the risk that a system trained with a certain dataset may not perform well when tested on data from a different source (the \"domain gap\" problem.) In this work, we propose a novel method to embed eye geometry information in an end-to-end gaze estimation network by means of a \"geometric layer\". Our experimental results show that our system outperforms other state-of-the-art methods in cross-dataset evaluation, while producing competitive performance over within dataset tests. In addition, the proposed system is able to extrapolate gaze angles outside the range of those considered in the training data.</p>","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"5 2","pages":""},"PeriodicalIF":1.4,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9231508/pdf/nihms-1800583.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40397122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Spiral into the Mind
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-04-28 · DOI: 10.1145/3530795
Maurice Koch, D. Weiskopf, K. Kurzhals
Comparing mobile eye tracking data from multiple participants without information about areas of interest (AOIs) is challenging because of individual timing and coordinate systems. We present a technique, the gaze spiral, that visualizes individual recordings based on image content of the stimulus. The spiral layout of the slitscan visualization is used to create a compact representation of scanpaths. The visualization provides an overview of multiple recordings even for long time spans and helps identify and annotate recurring patterns within recordings. The gaze spirals can also serve as glyphs that can be projected to 2D space based on established scanpath metrics in order to interpret the metrics and identify groups of similar viewing behavior. We present examples based on two egocentric datasets to demonstrate the effectiveness of our approach for annotation and comparison tasks. Our examples show that the technique has the potential to let users compare even long-term recordings of pervasive scenarios without manual annotation.
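The spiral layout itself is easy to sketch (display details of the slitscan strips are omitted, and the cycle length is whatever period the analyst wants to compare): map time to an Archimedean spiral, one turn per period, so recurring behavior lines up radially across turns.

```python
import numpy as np

# Spiral layout sketch: timestamps map to positions on an Archimedean
# spiral. Samples `period` seconds apart land at the same angle on
# adjacent turns, which makes recurring patterns visible.

def spiral_layout(t, period=10.0, spacing=1.0, r0=2.0):
    """Map timestamps to (x, y) positions on an Archimedean spiral."""
    turns = np.asarray(t) / period
    theta = 2 * np.pi * turns              # one revolution per period
    radius = r0 + spacing * turns          # radius grows linearly with time
    return radius * np.cos(theta), radius * np.sin(theta)

t = np.arange(0.0, 30.0, 0.5)              # a 30 s recording at 2 Hz (toy)
x, y = spiral_layout(t)
print(x[:3].round(2), y[:3].round(2))
```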
{"title":"A Spiral into the Mind","authors":"Maurice Koch, D. Weiskopf, K. Kurzhals","doi":"10.1145/3530795","DOIUrl":"https://doi.org/10.1145/3530795","url":null,"abstract":"Comparing mobile eye tracking data from multiple participants without information about areas of interest (AOIs) is challenging because of individual timing and coordinate systems. We present a technique, the gaze spiral, that visualizes individual recordings based on image content of the stimulus. The spiral layout of the slitscan visualization is used to create a compact representation of scanpaths. The visualization provides an overview of multiple recordings even for long time spans and helps identify and annotate recurring patterns within recordings. The gaze spirals can also serve as glyphs that can be projected to 2D space based on established scanpath metrics in order to interpret the metrics and identify groups of similar viewing behavior. We present examples based on two egocentric datasets to demonstrate the effectiveness of our approach for annotation and comparison tasks. Our examples show that the technique has the potential to let users compare even long-term recordings of pervasive scenarios without manual annotation.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":" ","pages":"1 - 16"},"PeriodicalIF":0.0,"publicationDate":"2022-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42151453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Software Rasterization of 2 Billion Points in Real Time
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING · Pub Date: 2022-04-04 · DOI: 10.1145/3543863
Markus Schütz, B. Kerbl, M. Wimmer
The accelerated collection of detailed real-world 3D data in the form of ever-larger point clouds is sparking a demand for novel visualization techniques that are capable of rendering billions of point primitives in real-time. We propose a software rasterization pipeline for point clouds that is capable of rendering up to two billion points in real-time (60 FPS) on commodity hardware. Improvements over the state of the art are achieved by batching points, enabling a number of batch-level optimizations before rasterizing them within the same rendering pass. These optimizations include frustum culling, level-of-detail (LOD) rendering, and choosing the appropriate coordinate precision for a given batch of points directly within a compute workgroup. Adaptive coordinate precision, in conjunction with visibility buffers, reduces the required data for the majority of points to just four bytes, making our approach several times faster than the bandwidth-limited state of the art. Furthermore, support for LOD rendering makes our software rasterization approach suitable for rendering arbitrarily large point clouds, and to meet the elevated performance demands of virtual reality applications.
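The adaptive-precision idea can be sketched on the CPU (the bit budgets and error target below are illustrative assumptions, not the paper's exact scheme): each batch quantizes coordinates relative to its own bounding box, using only as many bits as needed to keep the quantization error below a target such as the projected pixel size.

```python
import numpy as np

# Per-batch adaptive coordinate precision: store batch-local, quantized
# coordinates with just enough bits for a given world-space error bound.
# Assumes a batch with nonzero extent on every axis.

def bits_needed(extent, max_error):
    """Smallest b with extent / 2^b <= max_error, clamped to [1, 30]."""
    return int(np.clip(np.ceil(np.log2(extent / max_error)), 1, 30))

def quantize_batch(points, max_error=0.001):
    lo, hi = points.min(axis=0), points.max(axis=0)
    b = bits_needed((hi - lo).max(), max_error)
    q = np.round((points - lo) / (hi - lo) * (2**b - 1)).astype(np.uint32)
    return q, lo, hi, b

def dequantize(q, lo, hi, b):
    return lo + q / (2**b - 1) * (hi - lo)

pts = np.random.default_rng(0).uniform(0, 5, size=(1000, 3))
q, lo, hi, b = quantize_batch(pts)
err = np.abs(dequantize(q, lo, hi, b) - pts).max()
print(b, "bits per axis; max round-trip error", err)
```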
{"title":"Software Rasterization of 2 Billion Points in Real Time","authors":"Markus Schütz, B. Kerbl, M. Wimmer","doi":"10.1145/3543863","DOIUrl":"https://doi.org/10.1145/3543863","url":null,"abstract":"The accelerated collection of detailed real-world 3D data in the form of ever-larger point clouds is sparking a demand for novel visualization techniques that are capable of rendering billions of point primitives in real-time. We propose a software rasterization pipeline for point clouds that is capable of rendering up to two billion points in real-time (60 FPS) on commodity hardware. Improvements over the state of the art are achieved by batching points, enabling a number of batch-level optimizations before rasterizing them within the same rendering pass. These optimizations include frustum culling, level-of-detail (LOD) rendering, and choosing the appropriate coordinate precision for a given batch of points directly within a compute workgroup. Adaptive coordinate precision, in conjunction with visibility buffers, reduces the required data for the majority of points to just four bytes, making our approach several times faster than the bandwidth-limited state of the art. Furthermore, support for LOD rendering makes our software rasterization approach suitable for rendering arbitrarily large point clouds, and to meet the elevated performance demands of virtual reality applications.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"5 1","pages":"1 - 17"},"PeriodicalIF":0.0,"publicationDate":"2022-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41687153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15