Latest Publications in Graphical Models

ImplicitPCA: Implicitly-proxied parametric encoding for collision-aware garment reconstruction
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101195
Lan Chen, Jie Yang, Hongbo Fu, Xiaoxu Meng, Weikai Chen, Bo Yang, Lin Gao

The emerging remote collaboration in virtual environments calls for quickly generating high-fidelity clothed 3D humans from a single image. To estimate clothing geometry and topology, parametric models are widely used but often lack details. Alternative approaches based on implicit functions can generate accurate details but are limited to closed surfaces and may not produce physically correct reconstructions, such as collision-free human avatars. To solve these problems, we present ImplicitPCA, a framework for high-fidelity single-view garment reconstruction that combines the strengths of explicit and implicit representations. The key is a parametric SDF network that closely couples parametric encoding with implicit functions and thus enjoys the fine details brought by implicit reconstruction while maintaining correct topology with open surfaces. We further introduce a collision-aware regression network to ensure the physical correctness of cloth and human. During inference, an iterative routine is applied to an input image with 2D garment landmarks to obtain optimal parameters by aligning the cloth mesh projection with the 2D landmarks and fitting the parametric implicit fields to the reconstructed cloth SDF. Experiments on a public dataset and in-the-wild images demonstrate that our method outperforms prior works, reconstructing detailed, topology-correct 3D garments while avoiding garment-body collisions.
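
The collision-aware idea lends itself to a compact illustration. Below is a minimal sketch (not the authors' code) of how a hinge penalty on a body SDF can keep garment points collision-free; `body_sdf` is a hypothetical stand-in for the learned parametric SDF network, here replaced by a unit sphere.

```python
import torch

def body_sdf(points: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the learned parametric body SDF:
    # a unit sphere at the origin (negative inside, positive outside).
    return points.norm(dim=-1) - 1.0

def collision_penalty(garment_pts: torch.Tensor, eps: float = 5e-3) -> torch.Tensor:
    # Hinge loss: zero when every garment point lies at least `eps`
    # outside the body surface, positive for penetrating points.
    return torch.relu(eps - body_sdf(garment_pts)).mean()

pts = torch.randn(1024, 3, requires_grad=True)  # candidate garment points
loss = collision_penalty(pts)
loss.backward()  # gradients push penetrating points outward along the SDF
```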

Citations: 0
Representing uncertainty through sentiment and stance visualizations: A survey
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101191
Bárbara Ramalho, Joaquim Jorge, Sandra Gama

Visual analytics combines automated analysis techniques with interactive visualizations for effective understanding, reasoning, and decision-making on complex data. However, accurately classifying sentiments and stances in sentiment analysis remains challenging due to ambiguity and individual differences. This survey examines 35 papers published between 2016 and 2022, identifying unaddressed sources of friction that contribute to a gap between individual sentiment, processed data, and visual representation. We explore the impact of visualizations on data perception, analyze existing techniques, and investigate the many facets of uncertainty in sentiment and stance visualizations. We also discuss the evaluation methods used and present opportunities for future research. Our work addresses a gap in previous surveys by focusing on uncertainty and the visualization of sentiment and stance, providing valuable insights for researchers in graphical models, computational methods, and information visualization.

Citations: 0
GIM3D plus: A labeled 3D dataset to design data-driven solutions for dressed humans
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101187
Pietro Musoni, Simone Melzi, Umberto Castellani

Segmentation and classification of clothes in real 3D data are particularly challenging due to the extreme variation of their shapes, even within the same cloth category, induced by the underlying human subject. Several data-driven methods try to cope with this problem, but they face a lack of available data for generalizing to diverse real-world instances. For this reason, we present GIM3D plus (Garments In Motion 3D plus), a synthetic dataset of clothed 3D human characters in different poses. A physical simulation of clothes generates the over 5000 3D models in this dataset with different fabrics, sizes, and tightness, using animated human avatars representing different subjects in diverse poses. Our dataset comprises single meshes created to simulate 3D scans, with labels for the separate clothes and the visible body parts. We also provide an evaluation of the use of GIM3D plus as a training set on garment segmentation and classification tasks using state-of-the-art data-driven methods for both meshes and point clouds.

Citations: 0
Neural style transfer for 3D meshes
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101198
Hongyuan Kang, Xiao Dong, Juan Cao, Zhonggui Chen

Style transfer is a popular research topic in the field of computer vision. In 3D stylization, a mesh model is deformed to achieve a specific geometric style. We explore a general neural style transfer framework for 3D meshes that can transfer multiple geometric styles from other meshes to the current mesh. Our stylization network is based on a pre-trained MeshNet model, from which content representation and Gram-based style representation are extracted. By constraining the similarity in content and style representation between the generated mesh and two different meshes, our network can generate a deformed mesh with a specific style while maintaining the content of the original mesh. Experiments verify the robustness of the proposed network and show the effectiveness of stylizing multiple models with one dedicated style mesh. We also conduct ablation experiments to analyze the effectiveness of our network.
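
As a concrete reference for the Gram-based style representation mentioned in the abstract, here is a minimal sketch assuming per-mesh features have already been extracted as a (C, N) tensor (MeshNet features in the paper; any feature extractor serves for the illustration).

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # feat: (C, N) features of one mesh; returns the (C, C) Gram matrix,
    # which discards spatial layout and keeps feature co-occurrence (style).
    c, n = feat.shape
    return feat @ feat.t() / (c * n)

def style_loss(feat_gen: torch.Tensor, feat_style: torch.Tensor) -> torch.Tensor:
    return F.mse_loss(gram_matrix(feat_gen), gram_matrix(feat_style))

def content_loss(feat_gen: torch.Tensor, feat_content: torch.Tensor) -> torch.Tensor:
    # Content is compared feature-to-feature, preserving spatial structure.
    return F.mse_loss(feat_gen, feat_content)
```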

Citations: 0
GSNet: Generating 3D garment animation via graph skinning network
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101197
Tao Peng, Jiewen Kuang, Jinxing Liang, Xinrong Hu, Jiazhe Miao, Ping Zhu, Lijun Li, Feng Yu, Minghua Jiang

The goal of digital garment-body animation is to produce animation that is as realistic as possible. Although a method based on the same topology as the body can produce realistic results, it can only be applied to garments that share the body's topology. Although generalization-based approaches can be extended to different types of garment templates, they still produce effects far from reality. We propose GSNet, a learning-based model that generates realistic garment animations and applies to garment types that do not match the body topology. We encode garment templates and body motions into a latent space and use graph convolution to transfer body motion information to garment templates to drive garment motions. Our model considers temporal dependency and provides reliable physical constraints to make the generated animations more realistic. Qualitative and quantitative experiments show that our approach achieves state-of-the-art 3D garment animation performance.
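
A minimal sketch of the graph-skinning idea, assuming the garment template is given as vertex positions plus a row-normalized adjacency matrix and the body motion has already been encoded into a latent code; the layer names and dimensions are illustrative, not GSNet's actual architecture.

```python
import torch

class GraphConv(torch.nn.Module):
    # One graph-convolution layer: average neighbor features, then transform.
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (V, V) row-normalized adjacency of the garment template mesh
        return self.lin(adj @ x)

V = 500
adj = torch.eye(V)                         # placeholder adjacency matrix
pos = torch.randn(V, 3)                    # garment template vertex positions
motion = torch.randn(1, 32).expand(V, 32)  # latent body-motion code, broadcast
layer = GraphConv(3 + 32, 3)               # predicts per-vertex displacements
deformed = pos + layer(torch.cat([pos, motion], dim=1), adj)
```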

Citations: 0
GBGVD: Growth-based geodesic Voronoi diagrams
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-10-01 | DOI: 10.1016/j.gmod.2023.101196
Yunjia Qi, Chen Zong, Yunxiao Zhang, Shuangmin Chen, Minfeng Xu, Lingqiang Ran, Jian Xu, Shiqing Xin, Ying He

Given a set of generators, the geodesic Voronoi diagram (GVD) defines how the base surface is decomposed into separate regions such that each generator dominates a region in terms of geodesic distance to the generators. Generally speaking, each ordinary bisector point of the GVD is determined by two adjacent generators, while each branching point of the GVD is given by at least three generators. When there are sufficiently many generators, straight-line distance serves as an effective alternative to geodesic distance for computing GVDs. However, for a set of sparse generators, one has to use exact or approximate geodesic distance instead, which requires a high computational cost to trace the bisectors and the branching points. We observe that it is easier to infer the branching points by stretching the ordinary segments than by competing between wavefronts from different directions. Based on this observation, we develop an unfolding technique to compute the ordinary points of the GVD, as well as a growth-based technique to stretch the traced bisector segments such that they finally grow into a complete GVD. Experimental results show that our algorithm runs 3 times as fast as the state-of-the-art method at the same accuracy level.
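
For orientation, a brute-force baseline makes the GVD definition concrete: label every mesh vertex by its nearest generator under graph geodesic distance via a multi-source Dijkstra. This is the naive wavefront competition the paper improves upon, not the proposed growth-based algorithm.

```python
import heapq

def voronoi_labels(n_verts, edges, generators):
    # edges: dict vertex -> list of (neighbor, edge_length);
    # generators: list of seed vertex ids. Returns a generator label per vertex.
    dist = [float("inf")] * n_verts
    label = [-1] * n_verts
    pq = []
    for g in generators:
        dist[g], label[g] = 0.0, g
        heapq.heappush(pq, (0.0, g))
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist[v]:
            continue  # stale queue entry
        for u, w in edges.get(v, []):
            if d + w < dist[u]:
                dist[u], label[u] = d + w, label[v]
                heapq.heappush(pq, (d + w, u))
    return label  # boundaries between labels approximate the GVD bisectors
```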

Citations: 0
PU-GAT: Point cloud upsampling with graph attention network
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-09-25 | DOI: 10.1016/j.gmod.2023.101201
Xuan Deng, Cheng Zhang, Jian Shi, Zizhao Wu

Point cloud upsampling has been extensively studied; however, existing approaches suffer from the loss of structural information because they neglect spatial dependencies between points. In this work, we propose PU-GAT, a novel 3D point cloud upsampling method that leverages graph attention networks to learn structural information beyond the baselines. Specifically, we first design a local-global feature extraction unit that combines spatial information and position encoding to mine the local spatial inter-dependencies across point features. Then, we construct an up-down-up feature expansion unit, which uses graph attention and GCN to enhance the ability to capture local structural information. Extensive experiments on synthetic and real data show that our method outperforms previous methods both quantitatively and qualitatively.
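
To make the graph-attention ingredient concrete, here is a minimal single-head sketch, assuming point features and precomputed k-NN indices are available; PU-GAT's actual units (local-global extraction, up-down-up expansion) are more elaborate.

```python
import torch

class PointGraphAttention(torch.nn.Module):
    # Aggregates each point's k nearest neighbors with learned attention.
    def __init__(self, dim: int):
        super().__init__()
        self.q = torch.nn.Linear(dim, dim)
        self.k = torch.nn.Linear(dim, dim)
        self.v = torch.nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor, knn_idx: torch.Tensor) -> torch.Tensor:
        # feats: (N, C) point features; knn_idx: (N, K) neighbor indices
        nbr = feats[knn_idx]                                          # (N, K, C)
        scores = (self.q(feats).unsqueeze(1) * self.k(nbr)).sum(-1)   # (N, K)
        att = torch.softmax(scores / feats.shape[-1] ** 0.5, dim=-1)
        return (att.unsqueeze(-1) * self.v(nbr)).sum(dim=1)           # (N, C)

feats = torch.randn(2048, 64)
knn_idx = torch.randint(0, 2048, (2048, 16))  # placeholder k-NN indices
out = PointGraphAttention(64)(feats, knn_idx)
```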

Citations: 0
Two-step techniques for accurate selection of small elements in VR environments
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-07-01 | DOI: 10.1016/j.gmod.2023.101183
Elena Molina, Pere-Pau Vázquez

One of the key interactions in 3D environments is target acquisition, which can be challenging when targets are small or in cluttered scenes. Here, incorrect elements may be selected, leading to frustration and wasted time. The accuracy is further hindered by the physical act of selection itself, typically involving pressing a button. This action reduces stability, increasing the likelihood of erroneous target acquisition. We focused on molecular visualization and on the challenge of selecting atoms, rendered as small spheres. We present two techniques that improve upon previous progressive selection techniques. They facilitate the acquisition of neighbors after an initial selection, providing a more comfortable experience compared to using classical ray-based selection, particularly with occluded elements. We conducted a pilot study followed by two formal user studies. The results indicated that our approaches were highly appreciated by the participants. These techniques could be suitable for other crowded environments as well.

Citations: 0
Efficient collision detection using hybrid medial axis transform and BVH for rigid body simulation
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-07-01 | DOI: 10.1016/j.gmod.2023.101180
Xingxin Li, Shibo Song, Junfeng Yao, Hanyin Zhang, Rongzhou Zhou, Qingqi Hong

Medial Axis Transform (MAT) has recently been adopted as the acceleration structure for broad-phase collision detection. Compared to traditional BVH-based methods, MAT can provide a high-fidelity volumetric approximation of complex 3D objects, resulting in higher collision culling efficiency. However, due to MAT's non-hierarchical structure, it may be outperformed in collision-light scenarios, where a few culling tests at the top levels of a BVH would require a large number of culling tests with MAT. We propose a collision detection method that combines MAT and BVH to address this problem. Our technique efficiently culls collisions between dynamic and static objects. Experimental results show that our method has higher culling efficiency than pure BVH or MAT methods.
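
A schematic sketch of a hybrid broad phase in this spirit, assuming each object carries both an axis-aligned bounding box (standing in for the top of a BVH) and a set of medial spheres from its MAT; a real implementation traverses full hierarchies rather than single boxes.

```python
import numpy as np

def aabb_overlap(lo_a, hi_a, lo_b, hi_b):
    # Cheap rejection test, standing in for the top levels of a BVH.
    return bool(np.all(lo_a <= hi_b) and np.all(lo_b <= hi_a))

def mat_spheres_overlap(spheres_a, spheres_b):
    # spheres_*: (n, 4) arrays of medial spheres (cx, cy, cz, r).
    for ca in spheres_a:
        d = np.linalg.norm(spheres_b[:, :3] - ca[:3], axis=1)
        if np.any(d <= ca[3] + spheres_b[:, 3]):
            return True
    return False

def broad_phase(obj_a, obj_b):
    if not aabb_overlap(obj_a["lo"], obj_a["hi"], obj_b["lo"], obj_b["hi"]):
        return False  # culled by the bounding-volume test
    # Volumetric MAT approximation decides the surviving candidate pairs.
    return mat_spheres_overlap(obj_a["mat"], obj_b["mat"])
```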

Citations: 0
A robust workflow for b-rep generation from image masks
IF 1.7 | CAS Zone 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2023-07-01 | DOI: 10.1016/j.gmod.2023.101174
Omar M. Hafez, Mark M. Rashid

A novel approach to generating watertight, manifold boundary representations from noisy binary image masks of MRI or CT scans is presented. The method samples an input segmented image and locally approximates the material boundary. Geometric error metrics between the voxelated boundary and an approximating template surface are minimized, and boundary points and normals are generated accordingly. Voronoi partitioning is employed to perform surface reconstruction on the resulting oriented point cloud. The method performs competitively against other approaches, both in comparisons of shape and volume error metrics against a canonical image mask and in qualitative comparisons using noisy image masks from real scans. The framework readily admits enhancements for capturing sharp edges and corners. The approach robustly produces high-quality b-reps that may be inserted into an image-based meshing pipeline for purposes of physics-based simulation.
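
One stage of such a pipeline can be illustrated directly: sampling boundary points with outward normals from a binary mask via the mask's smoothed gradient. This is a toy 2D sketch of the point/normal generation idea only; the paper fits approximating template surfaces and minimizes geometric error metrics.

```python
import numpy as np
from scipy import ndimage

def boundary_points_normals(mask: np.ndarray):
    # mask: 2D binary array. Returns boundary points and outward unit normals.
    smooth = ndimage.gaussian_filter(mask.astype(float), sigma=1.5)
    gy, gx = np.gradient(smooth)            # gradient points from 0s toward 1s
    interior = ndimage.binary_erosion(mask.astype(bool))
    ys, xs = np.nonzero(mask.astype(bool) ^ interior)  # one-pixel boundary band
    normals = np.stack([-gx[ys, xs], -gy[ys, xs]], axis=1)  # outward = -gradient
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    return np.stack([xs, ys], axis=1).astype(float), normals
```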

Citations: 1