
Latest publications in Graphical Models

Component-aware generative autoencoder for structure hybrid and shape completion
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101185
Fan Zhang, Qiang Fu, Yang Liu, Xueming Li

Assembling components of man-made objects to create new structures or complete 3D shapes is a popular approach in 3D modeling. Recently, leveraging deep neural networks for assembly-based 3D modeling has been widely studied. However, exploring new component combinations, even across different categories, remains challenging for most deep-learning-based 3D modeling methods. In this paper, we propose a novel generative autoencoder that tackles component combination for the 3D modeling of man-made objects. We use the segmented input objects to create component volumes that have redundant components and random configurations. By using the input objects and the associated component volumes to train the autoencoder, we can obtain an object volume consisting of components with proper quality and structure as the network output. Such a generative autoencoder can be applied either to multiple object categories for structure hybrid or to a single object category for shape completion. We conduct a series of evaluations, and the experimental results demonstrate the usability and practicability of our method.
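
As a concrete illustration of this training setup, here is a minimal sketch in PyTorch (not the authors' code): a 3D convolutional encoder-decoder takes a voxelized component volume with redundant parts as input and is supervised with the ground-truth object volume. The grid resolution, channel widths, and loss choice are all our own assumptions.

```python
# Minimal sketch of the component-volume -> object-volume training setup.
# Grid size (32^3), channel widths, and the BCE loss are assumptions.
import torch
import torch.nn as nn

class ComponentAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),     # logits
        )

    def forward(self, component_volume):
        return self.decoder(self.encoder(component_volume))

model = ComponentAutoencoder()
x = torch.rand(2, 1, 32, 32, 32)                         # component volumes
target = (torch.rand(2, 1, 32, 32, 32) > 0.5).float()    # object volumes
loss = nn.functional.binary_cross_entropy_with_logits(model(x), target)
loss.backward()
```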

{"title":"Component-aware generative autoencoder for structure hybrid and shape completion","authors":"Fan Zhang,&nbsp;Qiang Fu,&nbsp;Yang Liu,&nbsp;Xueming Li","doi":"10.1016/j.gmod.2023.101185","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101185","url":null,"abstract":"<div><p>Assembling components of man-made objects to create new structures or complete 3D shapes is a popular approach in 3D modeling techniques. Recently, leveraging deep neural networks for assembly-based 3D modeling has been widely studied. However, exploring new component combinations even across different categories is still challenging for most of the deep-learning-based 3D modeling methods. In this paper, we propose a novel generative autoencoder that tackles the component combinations for 3D modeling of man-made objects. We use the segmented input objects to create component volumes that have redundant components and random configurations. By using the input objects and the associated component volumes to train the autoencoder, we can obtain an object volume consisting of components with proper quality and structure as the network output. Such a generative autoencoder can be applied to either multiple object categories for structure hybrid or a single object category for shape completion. We conduct a series of evaluations and experimental results to demonstrate the usability and practicability of our method.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101185"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Non-homogeneous denoising for virtual reality in real-time path tracing rendering
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101184
Victor Peres, Esteban Clua, Thiago Porcino, Anselmo Montenegro

Real-time path tracing is becoming an important approach for the future of games, digital entertainment, and virtual reality applications that require realism and immersive environments. Among the possible optimizations, denoising Monte Carlo rendered images is necessary at low sampling densities. When dealing with virtual reality devices, other possibilities can also be considered, such as foveated rendering techniques. Hence, this work proposes a novel and promising rendering pipeline for denoising a real-time path-traced application in a dual-screen system such as head-mounted display (HMD) devices. To this end, we leverage characteristics of foveal vision by computing G-Buffers with the features of the scene and a buffer with the foveated distribution for both the left and right screens. We then path trace the image within the coordinates buffer, generating only a few initial rays per selected pixel, and reconstruct the noisy image output with a novel non-homogeneous denoiser that accounts for the pixel distribution. Our experiments showed that the proposed rendering pipeline can achieve a speedup factor of up to 1.35 compared to one without our optimizations.
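
The foveated distribution buffer can be pictured with a small sketch, under our own assumptions rather than the paper's parameters: each eye's screen gets a samples-per-pixel map that is dense near the gaze point and sparse in the periphery.

```python
# Illustrative foveated sample-budget map; falloff curve and budgets are
# invented for the sketch, not taken from the paper.
import numpy as np

def foveated_spp(width, height, gaze_xy, max_spp=4, min_spp=1, fovea_radius=0.15):
    """Samples-per-pixel map: max_spp inside the fovea, decaying outside."""
    ys, xs = np.mgrid[0:height, 0:width]
    # Distance from the gaze point, normalized by the screen diagonal.
    d = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / np.hypot(width, height)
    falloff = np.clip((d - fovea_radius) / (0.5 - fovea_radius), 0.0, 1.0)
    spp = max_spp - (max_spp - min_spp) * falloff
    return np.rint(spp).astype(int)

# One budget map per HMD screen, each with its own gaze point.
left_spp = foveated_spp(960, 1080, gaze_xy=(480, 540))
right_spp = foveated_spp(960, 1080, gaze_xy=(500, 540))
print(left_spp.max(), left_spp.min())   # dense in the fovea, sparse outside
```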

{"title":"Non-homogeneous denoising for virtual reality in real-time path tracing rendering","authors":"Victor Peres ,&nbsp;Esteban Clua ,&nbsp;Thiago Porcino ,&nbsp;Anselmo Montenegro","doi":"10.1016/j.gmod.2023.101184","DOIUrl":"10.1016/j.gmod.2023.101184","url":null,"abstract":"<div><p>Real time Path-tracing is becoming an important approach for the future of games, digital entertainment, and virtual reality applications that require realism and immersive environments. Among different possible optimizations, denoising Monte Carlo rendered images is necessary in low sampling densities. When dealing with Virtual Reality devices, other possibilities can also be considered, such as foveated rendering techniques. Hence, this work proposes a novel and promising rendering pipeline for denoising a real-time path-traced application in a dual-screen system such as head-mounted display (HMD) devices. Therefore, we leverage characteristics of the foveal vision by computing G-Buffers with the features of the scene and a buffer with the foveated distribution for both left and right screens. Later, we path trace the image within the coordinates buffer generating only a few initial rays per selected pixel, and reconstruct the noisy image output with a novel non-homogeneous denoiser that accounts for the pixel distribution. Our experiments showed that this proposed rendering pipeline could achieve a speedup factor up to 1.35 compared to one without our optimizations.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101184"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43863090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Obituary: Christoph M. Hoffmann
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101186
{"title":"Obituary: Christoph M. Hoffmann","authors":"","doi":"10.1016/j.gmod.2023.101186","DOIUrl":"10.1016/j.gmod.2023.101186","url":null,"abstract":"","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101186"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48312779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
ImplicitPCA: Implicitly-proxied parametric encoding for collision-aware garment reconstruction
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101195
Lan Chen, Jie Yang, Hongbo Fu, Xiaoxu Meng, Weikai Chen, Bo Yang, Lin Gao

The emerging remote collaboration in virtual environments calls for quickly generating high-fidelity 3D humans with cloth from a single image. To estimate clothing geometry and topology, parametric models are widely used but often lack details. Alternative approaches based on implicit functions can generate accurate details but are limited to closed surfaces and may not produce physically correct reconstructions, such as collision-free human avatars. To solve these problems, we present ImplicitPCA, a framework for high-fidelity single-view garment reconstruction that combines the strengths of explicit and implicit representations. The key is a parametric SDF network that closely couples parametric encoding with implicit functions and thus enjoys the fine details brought by implicit reconstruction while maintaining correct topology with open surfaces. We further introduce a collision-aware regression network to ensure the physical correctness of the cloth and the human body. During inference, an iterative routine is applied to an input image with 2D garment landmarks to obtain optimal parameters by aligning the cloth mesh projection with the 2D landmarks and fitting the parametric implicit fields to the reconstructed cloth SDF. Experiments on a public dataset and in-the-wild images demonstrate that our results outperform prior work, reconstructing detailed, topology-correct 3D garments while avoiding garment-body collisions.
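
The inference routine follows a standard analysis-by-synthesis pattern. The toy sketch below shows only that optimization pattern, with a stand-in linear decoder and a pinhole projection in place of the paper's parametric SDF network; every name, shape, and constant here is hypothetical.

```python
# Toy analysis-by-synthesis loop: gradient steps align the projection of
# predicted 3D landmarks with detected 2D landmarks. The linear "decoder"
# and camera are stand-ins, not the paper's model.
import torch

torch.manual_seed(0)
num_params, num_landmarks, focal = 10, 8, 500.0
basis = torch.randn(num_landmarks, 3, num_params)     # stand-in decoder weights
mean_shape = torch.randn(num_landmarks, 3) + torch.tensor([0.0, 0.0, 5.0])
target_2d = torch.randn(num_landmarks, 2) * 20.0      # detected landmarks (px)

def project(points_3d):
    # Simple pinhole projection; the z-offset keeps points in front of the camera.
    return focal * points_3d[:, :2] / points_3d[:, 2:3]

params = torch.zeros(num_params, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)
for step in range(200):
    landmarks_3d = mean_shape + torch.einsum('lkp,p->lk', basis, params)
    loss = ((project(landmarks_3d) - target_2d) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f'final reprojection loss: {loss.item():.4f}')
```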

{"title":"ImplicitPCA: Implicitly-proxied parametric encoding for collision-aware garment reconstruction","authors":"Lan Chen ,&nbsp;Jie Yang ,&nbsp;Hongbo Fu ,&nbsp;Xiaoxu Meng ,&nbsp;Weikai Chen ,&nbsp;Bo Yang ,&nbsp;Lin Gao","doi":"10.1016/j.gmod.2023.101195","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101195","url":null,"abstract":"<div><p>The emerging remote collaboration in a virtual environment calls for quickly generating high-fidelity 3D humans with cloth from a single image. To estimate clothing geometry and topology, parametric models are widely used but often lack details. Alternative approaches based on implicit functions can generate accurate details but are limited to closed surfaces and may not produce physically correct reconstructions, such as collision-free human avatars. To solve these problems, we present <em>ImplicitPCA</em>, a framework for high-fidelity single-view garment reconstruction that bridges the good ends of explicit and implicit representations. The key is a parametric SDF network that closely couples parametric encoding with implicit functions and thus enjoys the fine details brought by implicit reconstruction while maintaining correct topology with open surfaces. We further introduce a collision-aware regression network to ensure the physical correctness of cloth and human. During inference, an iterative routine is applied to an input image with 2D garment landmarks to obtain optimal parameters by aligning the cloth mesh projection with the 2D landmarks and fitting the parametric implicit fields with the reconstructed cloth SDF. The experiments on the public dataset and in-the-wild images demonstrate that our result outperforms the prior works, reconstructing detailed, topology-correct 3D garments while avoiding garment-body collisions.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101195"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Representing uncertainty through sentiment and stance visualizations: A survey
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101191
Bárbara Ramalho, Joaquim Jorge, Sandra Gama

Visual analytics combines automated analysis techniques with interactive visualizations for effective understanding, reasoning, and decision-making on complex data. However, accurately classifying sentiments and stances in sentiment analysis remains challenging due to ambiguity and individual differences. This survey examines 35 papers published between 2016 and 2022, identifying unaddressed sources of friction that contribute to a gap between individual sentiment, processed data, and visual representation. We explore the impact of visualizations on data perception, analyze existing techniques, and investigate the many facets of uncertainty in sentiment and stance visualizations. We also discuss the evaluation methods used and present opportunities for future research. Our work addresses a gap in previous surveys by focusing on uncertainty and the visualization of sentiment and stance, providing valuable insights for researchers in graphical models, computational methods, and information visualization.

{"title":"Representing uncertainty through sentiment and stance visualizations: A survey","authors":"Bárbara Ramalho,&nbsp;Joaquim Jorge,&nbsp;Sandra Gama","doi":"10.1016/j.gmod.2023.101191","DOIUrl":"10.1016/j.gmod.2023.101191","url":null,"abstract":"<div><p>Visual analytics combines automated analysis techniques with interactive visualizations for effective understanding, reasoning, and decision-making on complex data. However, accurately classifying sentiments and stances in sentiment analysis remains challenging due to ambiguity and individual differences. This survey examines 35 papers published between 2016 and 2022, identifying unaddressed sources of friction that contribute to a gap between individual sentiment, processed data, and visual representation. We explore the impact of visualizations on data perception, analyze existing techniques, and investigate the many facets of uncertainty in sentiment and stance visualizations. We also discuss the evaluation methods used and present opportunities for future research. Our work addresses a gap in previous surveys by focusing on uncertainty and the visualization of sentiment and stance, providing valuable insights for researchers in graphical models, computational methods, and information visualization.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101191"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47783325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GIM3D plus: A labeled 3D dataset to design data-driven solutions for dressed humans
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101187
Pietro Musoni, Simone Melzi, Umberto Castellani

Segmentation and classification of clothes in real 3D data are particularly challenging due to the extreme variation of their shapes, even within the same cloth category, induced by the underlying human subject. Several data-driven methods try to cope with this problem, but they face a lack of available data for generalizing to diverse real-world instances. For this reason, we present GIM3D plus (Garments In Motion 3D plus), a synthetic dataset of clothed 3D human characters in different poses. A physical simulation of clothes generates the over 5000 3D models in this dataset, with different fabrics, sizes, and tightness, using animated human avatars representing different subjects in diverse poses. Our dataset comprises single meshes created to simulate 3D scans, with labels for the separate clothes and the visible body parts. We also provide an evaluation of the use of GIM3D plus as a training set on garment segmentation and classification tasks, using state-of-the-art data-driven methods for both meshes and point clouds.

{"title":"GIM3D plus: A labeled 3D dataset to design data-driven solutions for dressed humans","authors":"Pietro Musoni ,&nbsp;Simone Melzi ,&nbsp;Umberto Castellani","doi":"10.1016/j.gmod.2023.101187","DOIUrl":"10.1016/j.gmod.2023.101187","url":null,"abstract":"<div><p>Segmentation and classification of clothes in real 3D data are particularly challenging due to the extreme variation of their shapes, even among the same cloth category, induced by the underlying human subject. Several data-driven methods try to cope with this problem. Still, they must face the lack of available data to generalize to various real-world instances. For this reason, we present GIM3D plus (Garments In Motion 3D plus), a synthetic dataset of clothed 3D human characters in different poses. A physical simulation of clothes generates the over 5000 3D models in this dataset with different fabrics, sizes, and tightness, using animated human avatars representing different subjects in diverse poses. Our dataset comprises single meshes created to simulate 3D scans, with labels for the separate clothes and the visible body parts. We also provide an evaluation of the use of GIM3D plus as a training set on garment segmentation and classification tasks using state-of-the-art data-driven methods for both meshes and point clouds.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101187"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45694870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Neural style transfer for 3D meshes
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101198
Hongyuan Kang, Xiao Dong, Juan Cao, Zhonggui Chen

Style transfer is a popular research topic in the field of computer vision. In 3D stylization, a mesh model is deformed to achieve a specific geometric style. We explore a general neural style transfer framework for 3D meshes that can transfer multiple geometric styles from other meshes to the current mesh. Our stylization network is based on a pre-trained MeshNet model, from which content representation and Gram-based style representation are extracted. By constraining the similarity in content and style representation between the generated mesh and two different meshes, our network can generate a deformed mesh with a specific style while maintaining the content of the original mesh. Experiments verify the robustness of the proposed network and show the effectiveness of stylizing multiple models with one dedicated style mesh. We also conduct ablation experiments to analyze the effectiveness of our network.
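
For readers unfamiliar with Gram-based style representations, the sketch below spells out the usual definitions of the two losses named above: an L2 content loss on feature maps and a Gram-matrix style loss. The feature tensors are random stand-ins; in the paper they would come from the pre-trained MeshNet.

```python
# Standard content + Gram-style losses; feature shapes are hypothetical.
import torch

def gram_matrix(features):
    """features: (channels, n) feature matrix -> (channels, channels) Gram."""
    c, n = features.shape
    return features @ features.t() / n

def style_content_loss(feat_gen, feat_content, feat_style, style_weight=10.0):
    content_loss = ((feat_gen - feat_content) ** 2).mean()
    style_loss = ((gram_matrix(feat_gen) - gram_matrix(feat_style)) ** 2).mean()
    return content_loss + style_weight * style_loss

feat_gen = torch.randn(64, 1024, requires_grad=True)   # generated mesh features
feat_content = torch.randn(64, 1024)                   # content mesh features
feat_style = torch.randn(64, 1024)                     # style mesh features
loss = style_content_loss(feat_gen, feat_content, feat_style)
loss.backward()
```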

{"title":"Neural style transfer for 3D meshes","authors":"Hongyuan Kang ,&nbsp;Xiao Dong ,&nbsp;Juan Cao ,&nbsp;Zhonggui Chen","doi":"10.1016/j.gmod.2023.101198","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101198","url":null,"abstract":"<div><p>Style transfer is a popular research topic in the field of computer vision. In 3D stylization, a mesh model is deformed to achieve a specific geometric style. We explore a general neural style transfer framework for 3D meshes that can transfer multiple geometric styles from other meshes to the current mesh. Our stylization network is based on a pre-trained MeshNet model, from which content representation and Gram-based style representation are extracted. By constraining the similarity in content and style representation between the generated mesh and two different meshes, our network can generate a deformed mesh with a specific style while maintaining the content of the original mesh. Experiments verify the robustness of the proposed network and show the effectiveness of stylizing multiple models with one dedicated style mesh. We also conduct ablation experiments to analyze the effectiveness of our network.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101198"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GSNet: Generating 3D garment animation via graph skinning network
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101197
Tao Peng, Jiewen Kuang, Jinxing Liang, Xinrong Hu, Jiazhe Miao, Ping Zhu, Lijun Li, Feng Yu, Minghua Jiang

The goal of digital dressed-body animation is to produce garment-on-body animations that are as realistic as possible. Although a method based on the same topology as the body can produce realistic results, it can only be applied to garments sharing the body's topology. While generalization-based approaches can be extended to different types of garment templates, they still produce effects far from reality. We propose GSNet, a learning-based model that generates realistic garment animations and applies to garment types that do not match the body topology. We encode garment templates and body motions into a latent space and use graph convolution to transfer body motion information to garment templates to drive garment motions. Our model considers temporal dependency and provides reliable physical constraints to make the generated animations more realistic. Qualitative and quantitative experiments show that our approach achieves state-of-the-art 3D garment animation performance.
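
The propagation step can be illustrated with a generic sketch (our own simplification, not GSNet itself): a latent body-motion code is broadcast to every garment vertex and mixed with neighborhood information by a graph convolution that predicts per-vertex displacements.

```python
# Generic motion-to-garment propagation: broadcast a motion code to all
# vertices, then aggregate over the garment graph. Shapes are invented.
import torch
import torch.nn as nn

class GraphSkinLayer(nn.Module):
    def __init__(self, vert_dim, motion_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(vert_dim + motion_dim, out_dim)

    def forward(self, verts, adj, motion_code):
        # verts: (V, vert_dim); adj: (V, V) row-normalized; motion_code: (motion_dim,)
        motion = motion_code.expand(verts.shape[0], -1)   # broadcast to vertices
        mixed = torch.cat([verts, motion], dim=-1)
        return torch.relu(adj @ self.lin(mixed))          # aggregate neighbors

V = 500
verts = torch.randn(V, 3)                  # garment template vertex positions
adj = torch.eye(V)                         # stand-in for the mesh adjacency
motion_code = torch.randn(32)              # encoded body-motion latent
layer = GraphSkinLayer(vert_dim=3, motion_dim=32, out_dim=3)
offsets = layer(verts, adj, motion_code)   # per-vertex displacement
deformed = verts + offsets
```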

{"title":"GSNet: Generating 3D garment animation via graph skinning network","authors":"Tao Peng ,&nbsp;Jiewen Kuang ,&nbsp;Jinxing Liang ,&nbsp;Xinrong Hu ,&nbsp;Jiazhe Miao ,&nbsp;Ping Zhu ,&nbsp;Lijun Li ,&nbsp;Feng Yu ,&nbsp;Minghua Jiang","doi":"10.1016/j.gmod.2023.101197","DOIUrl":"10.1016/j.gmod.2023.101197","url":null,"abstract":"<div><p>The goal of digital dress body animation is to produce the most realistic dress body animation possible. Although a method based on the same topology as the body can produce realistic results, it can only be applied to garments with the same topology as the body. Although the generalization-based approach can be extended to different types of garment templates, it still produces effects far from reality. We propose GSNet, a learning-based model that generates realistic garment animations and applies to garment types that do not match the body topology. We encode garment templates and body motions into latent space and use graph convolution to transfer body motion information to garment templates to drive garment motions. Our model considers temporal dependency and provides reliable physical constraints to make the generated animations more realistic. Qualitative and quantitative experiments show that our approach achieves state-of-the-art 3D garment animation performance.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101197"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47857326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
GBGVD: Growth-based geodesic Voronoi diagrams
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101196
Yunjia Qi, Chen Zong, Yunxiao Zhang, Shuangmin Chen, Minfeng Xu, Lingqiang Ran, Jian Xu, Shiqing Xin, Ying He

Given a set of generators, the geodesic Voronoi diagram (GVD) defines how the base surface is decomposed into separate regions such that each generator dominates one region in terms of geodesic distance to the generators. Generally speaking, each ordinary bisector point of the GVD is determined by two adjacent generators, while each branching point of the GVD is given by at least three generators. When there are sufficiently many generators, straight-line distance serves as an effective alternative to geodesic distance for computing GVDs. For a set of sparse generators, however, one has to use exact or approximate geodesic distance instead, which requires a high computational cost to trace the bisectors and the branching points. We observe that it is easier to infer the branching points by stretching the ordinary segments than by competing between wavefronts from different directions. Based on this observation, we develop an unfolding technique to compute the ordinary points of the GVD, as well as a growth-based technique to stretch the traced bisector segments such that they finally grow into a complete GVD. Experimental results show that our algorithm runs 3 times as fast as the state-of-the-art method at the same accuracy level.
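
For context, the brute-force baseline that such methods refine can be written in a few lines: approximate geodesic distance by shortest paths along mesh edges and label each vertex with its nearest generator via multi-source Dijkstra. The sketch below is only that coarse per-vertex decomposition, not the paper's bisector tracing or growth-based stretching.

```python
# Coarse GVD baseline: multi-source Dijkstra over mesh edges assigns each
# vertex to its geodesically nearest generator.
import heapq
from math import dist

def gvd_labels(verts, edges, generators):
    """verts: list of (x,y,z); edges: list of (i,j); generators: vertex ids."""
    graph = {i: [] for i in range(len(verts))}
    for i, j in edges:
        w = dist(verts[i], verts[j])
        graph[i].append((j, w))
        graph[j].append((i, w))
    best = {g: (0.0, g) for g in generators}           # vertex -> (dist, label)
    heap = [(0.0, g, g) for g in generators]
    while heap:
        d, v, label = heapq.heappop(heap)
        if best.get(v, (float('inf'),))[0] < d:
            continue                                   # stale queue entry
        for nbr, w in graph[v]:
            if d + w < best.get(nbr, (float('inf'),))[0]:
                best[nbr] = (d + w, label)
                heapq.heappush(heap, (d + w, nbr, label))
    return {v: lab for v, (_, lab) in best.items()}

verts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
edges = [(0, 1), (1, 2), (2, 3)]
print(gvd_labels(verts, edges, generators=[0, 3]))     # vertex -> nearest generator
```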

{"title":"GBGVD: Growth-based geodesic Voronoi diagrams","authors":"Yunjia Qi ,&nbsp;Chen Zong ,&nbsp;Yunxiao Zhang ,&nbsp;Shuangmin Chen ,&nbsp;Minfeng Xu ,&nbsp;Lingqiang Ran ,&nbsp;Jian Xu ,&nbsp;Shiqing Xin ,&nbsp;Ying He","doi":"10.1016/j.gmod.2023.101196","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101196","url":null,"abstract":"<div><p>Given a set of generators, the geodesic Voronoi diagram (GVD) defines how the base surface is decomposed into separate regions such that each generator dominates a region in terms of geodesic distance to the generators. Generally speaking, each ordinary bisector point of the GVD is determined by two adjacent generators while each branching point of the GVD is given by at least three generators. When there are sufficiently many generators, straight-line distance serves as an effective alternative of geodesic distance for computing GVDs. However, for a set of sparse generators, one has to use exact or approximate geodesic distance instead, which requires a high computational cost to trace the bisectors and the branching points. We observe that it is easier to infer the branching points by stretching the ordinary segments than competing between wavefronts from different directions. Based on the observation, we develop an unfolding technique to compute the ordinary points of the GVD, as well as a growth-based technique to stretch the traced bisector segments such that they finally grow into a complete GVD. Experimental results show that our algorithm runs 3 times as fast as the state-of-the-art method at the same accuracy level.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"129 ","pages":"Article 101196"},"PeriodicalIF":1.7,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49890151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
PU-GAT: Point cloud upsampling with graph attention network
IF 1.7 | CAS Zone 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-09-25 | DOI: 10.1016/j.gmod.2023.101201
Xuan Deng, Cheng Zhang, Jian Shi, Zizhao Wu

Point cloud upsampling has been extensively studied; however, existing approaches suffer from a loss of structural information because they neglect spatial dependencies between points. In this work, we propose PU-GAT, a novel 3D point cloud upsampling method that leverages graph attention networks to learn structural information beyond the baselines. Specifically, we first design a local–global feature extraction unit that combines spatial information and position encoding to mine the local spatial inter-dependencies across point features. Then, we construct an up-down-up feature expansion unit, which uses graph attention and GCN to enhance the ability to capture local structure information. Extensive experiments on synthetic and real data show that our method achieves superior performance over previous methods, both quantitatively and qualitatively.
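
A compact sketch of single-head graph attention over k-nearest-neighbor point neighborhoods, the building block the abstract refers to, follows; the head count, feature sizes, and scoring function are our own simplifications rather than PU-GAT's design.

```python
# Single-head attention over each point's k nearest neighbors; shapes and
# the dot-product scoring are simplifying assumptions.
import torch
import torch.nn as nn

class PointGraphAttention(nn.Module):
    def __init__(self, dim, k=8):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim)

    def forward(self, feats, xyz):
        # feats: (N, dim) per-point features; xyz: (N, 3) positions.
        d = torch.cdist(xyz, xyz)                       # (N, N) pairwise distances
        knn = d.topk(self.k, largest=False).indices     # (N, k) neighbor indices
        q = self.q(feats).unsqueeze(1)                  # (N, 1, dim)
        kv = self.kv(feats)[knn]                        # (N, k, dim)
        attn = torch.softmax((q * kv).sum(-1) / feats.shape[1] ** 0.5, dim=-1)
        return (attn.unsqueeze(-1) * kv).sum(1)         # (N, dim) aggregated

pts = torch.randn(1024, 3)
feats = torch.randn(1024, 64)
out = PointGraphAttention(dim=64, k=8)(feats, pts)
print(out.shape)  # torch.Size([1024, 64])
```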

{"title":"PU-GAT: Point cloud upsampling with graph attention network","authors":"Xuan Deng,&nbsp;Cheng Zhang,&nbsp;Jian Shi,&nbsp;Zizhao Wu","doi":"10.1016/j.gmod.2023.101201","DOIUrl":"https://doi.org/10.1016/j.gmod.2023.101201","url":null,"abstract":"<div><p>Point cloud upsampling has been extensively studied, however, the existing approaches suffer from the losing of structural information due to neglect of spatial dependencies between points. In this work, we propose PU-GAT, a novel 3D point cloud upsampling method that leverages graph attention networks to learn structural information over the baselines. Specifically, we first design a local–global feature extraction unit by combining spatial information and position encoding to mine the local spatial inter-dependencies across point features. Then, we construct an up-down-up feature expansion unit, which uses graph attention and GCN to enhance the ability of capturing local structure information. Extensive experiments on synthetic and real data have shown that our method achieves superior performance against previous methods quantitatively and qualitatively.</p></div>","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"130 ","pages":"Article 101201"},"PeriodicalIF":1.7,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49889743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0