
Latest Publications in Graphical Models

GarTemFormer: Temporal transformer-based for optimizing virtual garment animation
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-10-11 DOI: 10.1016/j.gmod.2024.101235
Jiazhe Miao, Tao Peng, Fei Fang, Xinrong Hu, Li Li
Virtual garment animation and deformation constitute a pivotal research direction in computer graphics, finding extensive applications in domains such as computer games, animation, and film. Traditional physics-based methods can simulate the physical characteristics of garments, such as elasticity and gravity, to generate realistic deformation effects. However, the computational complexity of such methods hinders real-time animation generation. Data-driven approaches, on the other hand, learn from existing garment deformation data, enabling rapid animation generation. Nevertheless, animations produced using this approach often lack realism, struggling to capture subtle variations in garment behavior. We propose an approach that balances realism and speed: by considering both spatial and temporal dimensions, we leverage real-world videos to capture human motion and garment deformation, thereby producing more realistic animation effects. We address the complexity of spatiotemporal attention by aligning input features and computing spatiotemporal attention at each spatial position in a batch-wise manner. For garment deformation, garment segmentation techniques are employed to extract garment templates from videos. Subsequently, leveraging our Transformer-based temporal framework, we capture the correlation between garment deformation and human body shape features, as well as frame-level dependencies. Furthermore, we utilize a feature fusion strategy to merge shape and motion features, addressing penetration issues between clothing and the human body through post-processing, thus generating collision-free garment deformation sequences. Qualitative and quantitative experiments demonstrate the superiority of our approach over existing methods, efficiently producing temporally coherent and realistic dynamic garment deformations.
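The batch-wise spatiotemporal attention the abstract describes can be pictured with a minimal PyTorch sketch: spatial positions are folded into the batch dimension so that attention is computed purely along the time axis at each position. Shapes, names, and the module layout here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: temporal self-attention applied per spatial position by
# folding positions into the batch dimension. Shapes and names are
# illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class BatchwiseTemporalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, C) -- batch, frames, spatial positions, channels
        B, T, N, C = x.shape
        x = x.permute(0, 2, 1, 3).reshape(B * N, T, C)  # one time sequence per position
        out, _ = self.attn(x, x, x)                     # attention along time only
        return out.reshape(B, N, T, C).permute(0, 2, 1, 3)
```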
Citations: 0
Building semantic segmentation from large-scale point clouds via primitive recognition
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-10-10 DOI: 10.1016/j.gmod.2024.101234
Chiara Romanengo, Daniela Cabiddu, Simone Pittaluga, Michela Mortara
Modelling objects at a large resolution or scale brings challenges in the storage and processing of data and requires efficient structures. In the context of modelling urban environments, we face both issues: acquired 3D data extends over geographic scales, and the digitization of buildings of historical value can be particularly dense. Therefore, it is crucial to exploit the point cloud derived from acquisition as much as possible, before (or alongside) deriving other representations (e.g., surface or volume meshes) for further needs (e.g., visualization, simulation). In this paper, we present our work in processing 3D data of urban areas towards the generation of a semantic model for a city digital twin. Specifically, we focus on the recognition of shape primitives (e.g., planes, cylinders, spheres) in point clouds representing urban scenes, with the main application being semantic segmentation into walls, roofs, streets, domes, vaults, arches, and so on.
Here, we extend the conference contribution in Romanengo et al. (2023a), where we presented our preliminary results on single buildings. In this extended version, we generalize the approach to manage whole cities by preliminarily splitting the point cloud building-wise and streamlining the pipeline. We add thorough experimentation with a benchmark dataset from the city of Tallinn (47,000 buildings), a portion of Vaihingen (170 buildings), and our case studies in Catania and Matera, Italy (4 high-resolution buildings). Results show that our approach successfully deals with point clouds of considerable size, either surveyed at high resolution or covering wide areas. In both cases, it proves robust to input noise and outliers but sensitive to uneven sampling density.
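For readers unfamiliar with primitive recognition, a generic RANSAC plane detector gives the flavor of fitting a shape primitive to a noisy point cloud; this is only a reference baseline, and the paper's own recognition method may differ.

```python
# Hedged sketch: generic RANSAC plane fitting on a point cloud, shown only to
# illustrate primitive recognition; not the paper's actual method.
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 500, tol: float = 0.02):
    """points: (N, 3). Returns (normal, d, inlier_mask) for the plane n.x + d = 0."""
    rng = np.random.default_rng(0)
    best_mask, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                     # degenerate sample, skip
            continue
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < tol  # inliers within distance tolerance
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```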
Citations: 0
Deep-learning-based point cloud completion methods: A review
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-10-03 DOI: 10.1016/j.gmod.2024.101233
Kun Zhang, Ao Zhang, Xiaohong Wang, Weisong Li
Point cloud completion aims to utilize algorithms to repair missing parts in 3D data to obtain high-quality point clouds. This technology is crucial for applications such as autonomous driving and urban planning. With deep learning's progress, the robustness and accuracy of point cloud completion have improved significantly. However, the quality of completed point clouds requires further enhancement to satisfy practical requirements. In this study, we conducted an extensive survey of point cloud completion methods, with the following main objectives: (i) We classified point cloud completion methods into categories based on their principles, such as point-based, convolution-based, GAN-based, and geometry-based methods, and thoroughly investigated the advantages and limitations of each category. (ii) We collected publicly available datasets for point cloud completion algorithms and conducted experimental comparisons using various typical deep-learning networks to draw conclusions. (iii) Building on this survey, we discuss future research trends in this rapidly evolving field.
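Completion methods in surveys of this kind are typically compared with the Chamfer distance; a minimal NumPy sketch follows, with the caveat that the exact variant used in this review's comparisons is an assumption.

```python
# Hedged sketch: symmetric Chamfer distance, the metric most completion papers
# report. Whether this exact variant matches the survey's tables is assumed.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (N, 3), b: (M, 3). Mean squared nearest-neighbor distance, both ways.
    Note: builds the full (N, M) distance matrix, so only for modest sizes."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```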
Citations: 0
Sketch-2-4D: Sketch driven dynamic 3D scene generation
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-09-16 DOI: 10.1016/j.gmod.2024.101231
Guo-Wei Yang, Dong-Yu Chen, Tai-Jiang Mu

Sketch-based content generation offers flexible controllability, making it a promising narrative avenue in film production. Directors often visualize their imagination by crafting storyboards using sketches and textual descriptions for each shot. However, current video generation methods suffer from three-dimensional inconsistencies, with notable artifacts during large motions or camera pans around scenes. A suitable solution is to directly generate a 4D scene, enabling consistent generation of dynamic three-dimensional scenes. We define the Sketch-2-4D problem, aiming to enhance controllability and consistency in this context. We propose a novel Control Score Distillation Sampling (SDS-C) for sketch-based 4D scene generation, providing precise control over scene dynamics. We further design Spatial Consistency Modules and Temporal Consistency Modules to tackle the temporal and spatial inconsistencies introduced by sketch-based control, respectively. Extensive experiments have demonstrated the effectiveness of our approach.
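SDS-C builds on standard Score Distillation Sampling, whose core gradient can be sketched as follows; `unet` and its call signature are placeholders for a pretrained diffusion model, and this is not the authors' control variant.

```python
# Hedged sketch: the standard SDS gradient that SDS-C builds on. `unet` is a
# placeholder for a pretrained noise-prediction diffusion model.
import torch

def sds_grad(unet, image: torch.Tensor, alphas_cumprod: torch.Tensor) -> torch.Tensor:
    t = torch.randint(0, len(alphas_cumprod), (1,), device=image.device)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(image)
    noisy = a.sqrt() * image + (1 - a).sqrt() * noise   # forward diffusion q(x_t | x_0)
    with torch.no_grad():
        eps_pred = unet(noisy, t)                       # predicted noise at step t
    w = 1 - a                                           # a common weighting choice
    # Backpropagate this pseudo-gradient through the differentiable renderer
    # into the scene parameters (the U-Net itself stays frozen).
    return w * (eps_pred - noise)
```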

Citations: 0
FACE: Feature-preserving CAD model surface reconstruction
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-09-12 DOI: 10.1016/j.gmod.2024.101230
Shuxian Cai, Yuanyan Ye, Juan Cao, Zhonggui Chen

Feature lines play a pivotal role in the reconstruction of CAD models. Currently, there is a lack of a robust explicit reconstruction algorithm capable of achieving sharp feature reconstruction in point clouds with noise and non-uniformity. In this paper, we propose a feature-preserving CAD model surface reconstruction algorithm, named FACE. The algorithm begins by preprocessing the point cloud through denoising and resampling steps, producing a high-quality point cloud that is free of noise and uniformly distributed. Then, it employs discrete optimal transport to detect feature regions and subsequently generates dense points along potential feature lines to enhance features. Finally, the advancing-front surface reconstruction method, based on normal vector directions, is applied to reconstruct the enhanced point cloud. Extensive experiments demonstrate that, for contaminated point clouds, this algorithm excels not only at reconstructing straight edges and corner points but also at handling curved edges and surfaces, surpassing existing methods.
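As an illustration of the denoising and resampling preprocessing the abstract mentions, here is one common recipe using Open3D; the paper's actual pipeline is not specified at this level of detail, so treat this as an assumption, and the file paths are illustrative.

```python
# Hedged sketch: one common way to implement a denoise-and-resample
# preprocessing step, not the paper's exact pipeline.
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")               # input path is illustrative
clean, idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
uniform = clean.voxel_down_sample(voxel_size=0.005)     # resample to even density
o3d.io.write_point_cloud("scan_clean.ply", uniform)
```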

Citations: 0
Image vectorization using a sparse patch layout
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-09-05 DOI: 10.1016/j.gmod.2024.101229
K. He, J.B.T.M. Roerdink, J. Kosinka

Mesh-based image vectorization techniques have been studied for a long time, mostly owing to their compactness and flexibility in capturing image features. However, existing methods often lead to relatively dense meshes, especially when applied to images with high-frequency details or textures. We present a novel method that automatically vectorizes an image into a sparse collection of Coons patches whose size adapts to image features. To balance the number of patches and the accuracy of feature alignment, we generate the layout based on a harmonic cross field constrained by image features. We support T-junctions, which keeps the number of patches low and ensures local adaptation to feature density, naturally complemented by varying mesh-color resolution over the patches. Our experimental results demonstrate the utility, accuracy, and sparsity of our method.
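A bilinearly blended Coons patch, the primitive this method vectorizes into, is fully defined by its four boundary curves; the sketch below evaluates one, with parameterization conventions chosen for illustration.

```python
# Hedged sketch: evaluating a bilinearly blended Coons patch from its four
# boundary curves; the parameterization convention here is illustrative.
import numpy as np

def coons(c0, c1, d0, d1, u: float, v: float) -> np.ndarray:
    """c0(u), c1(u): bottom/top boundary curves; d0(v), d1(v): left/right.
    Curves must agree at the corners, e.g. c0(0) == d0(0)."""
    lin_u = (1 - v) * c0(u) + v * c1(u)       # ruled surface between bottom and top
    lin_v = (1 - u) * d0(v) + u * d1(v)       # ruled surface between left and right
    bilin = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
             + (1 - u) * v * c1(0) + u * v * c1(1))   # bilinear corner correction
    return lin_u + lin_v - bilin
```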

Citations: 0
Corrigendum to Image restoration for digital line drawings using line masks [Graphical Models 135 (2024) 101226]
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-09-02 DOI: 10.1016/j.gmod.2024.101228
Yan Zhu, Yasushi Yamaguchi
{"title":"Corrigendum to Image restoration for digital line drawings using line masks [Graphical Models 135 (2024) 101226]","authors":"Yan Zhu,&nbsp;Yasushi Yamaguchi","doi":"10.1016/j.gmod.2024.101228","DOIUrl":"10.1016/j.gmod.2024.101228","url":null,"abstract":"","PeriodicalId":55083,"journal":{"name":"Graphical Models","volume":"135 ","pages":"Article 101228"},"PeriodicalIF":2.5,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S152407032400016X/pdfft?md5=c31a932ed00cc957b9680b9f31021df7&pid=1-s2.0-S152407032400016X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Image restoration for digital line drawings using line masks
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-08-20 DOI: 10.1016/j.gmod.2024.101226
Yan Zhu, Yasushi Yamaguchi

The restoration of digital images holds practical significance because degradation of digital image data on the internet is common. State-of-the-art image restoration methods usually employ end-to-end trained networks. However, we argue that a network trained with diverse image pairs is not optimal for restoring line drawings, which have extensive plain backgrounds. We propose a line-drawing restoration framework which takes a restoration neural network as backbone and processes an input degraded line drawing in two steps. First, a proposed mask-predicting network predicts a line mask which indicates the likely locations of foreground and background in the potential original line drawing. Next, we feed the degraded input line drawing together with the predicted line mask into the backbone restoration network. The traditional L1 loss for the backbone restoration network is replaced with a masked Mean Squared Error (MSE) loss. We test our framework on two classical image restoration tasks: JPEG restoration and super-resolution, and experiments demonstrate that our framework achieves better quantitative and visual results in most cases.
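The masked MSE idea can be sketched directly: reconstruction error is weighted by the predicted line mask so that foreground strokes dominate the loss over the plain background. The exact weighting scheme used in the paper is an assumption here.

```python
# Hedged sketch of a masked MSE: per-pixel error weighted by the line mask,
# with a small background weight. The paper's exact weighting is assumed.
import torch

def masked_mse(pred: torch.Tensor, target: torch.Tensor,
               mask: torch.Tensor, bg_weight: float = 0.1) -> torch.Tensor:
    # mask: (B, 1, H, W) in [0, 1], ~1 on line strokes, ~0 on plain background
    w = mask + bg_weight * (1 - mask)
    return (w * (pred - target) ** 2).sum() / w.sum()
```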

Citations: 0
Reconstruction of the bending line for free-form bent components extracting the centroids and exploiting NURBS curves
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-08-19 DOI: 10.1016/j.gmod.2024.101227
Lorenzo Scandola, Maximilian Erber, Philipp Hagenlocher, Florian Steinlehner, Wolfram Volk

Free-form bending belongs to the kinematics-based forming processes and allows the manufacturing of arbitrary 3D-bent components. To obtain the desired part, the tool kinematics is adjusted by comparing the target and obtained bending lines. While the target geometry consists of parametric CAD data, the obtained geometry is a surface mesh, making bending line extraction a challenging task. In this paper, the reconstruction of the bending line for free-form bent components is presented. The strategy relies on the extraction of the centroids, for which a ray casting algorithm is developed and compared to an existing Voronoi-based method. Subsequently, the obtained points are used to fit a NURBS parametric model of the curve. The algorithm parameters are investigated with a sensitivity analysis, and its performance is evaluated with a defined error metric. Finally, the strategy is validated by comparing its results with a Voronoi-based algorithm and investigating different cross-sections and geometries.
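Fitting a smooth parametric curve through the extracted centroids can be sketched with SciPy, which fits non-rational B-splines (a special case of NURBS); the paper's full NURBS model and error metric are not reproduced, and the input file name is illustrative.

```python
# Hedged sketch: cubic B-spline fit through extracted cross-section centroids,
# as a stand-in for the paper's NURBS model.
import numpy as np
from scipy.interpolate import splprep, splev

centroids = np.loadtxt("centroids.txt")            # (N, 3); file name is illustrative
tck, _ = splprep(centroids.T, s=1e-4, k=3)         # cubic spline, light smoothing
u = np.linspace(0.0, 1.0, 200)
bending_line = np.stack(splev(u, tck), axis=1)     # (200, 3) points on the fitted curve
```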

Citations: 0
Mesh deformation-based single-view 3D reconstruction of thin eyeglasses frames with differentiable rendering
IF 2.5 CAS Zone 4 (Computer Science) Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2024-08-09 DOI: 10.1016/j.gmod.2024.101225
Fan Zhang, Ziyue Ji, Weiguang Kang, Weiqing Li, Zhiyong Su

With the support of Virtual Reality (VR) and Augmented Reality (AR) technologies, the 3D virtual eyeglasses try-on application is well on its way to becoming a trending solution that offers a "try on" option for selecting the perfect pair of eyeglasses from the comfort of your own home. Reconstructing eyeglasses frames from a single image with traditional depth- and image-based methods is extremely difficult due to their unique characteristics, such as the lack of sufficient texture features, thin elements, and severe self-occlusions. In this paper, we propose the first mesh deformation-based reconstruction framework for recovering high-precision 3D full-frame eyeglasses models from a single RGB image, leveraging prior and domain-specific knowledge. Specifically, based on the construction of a synthetic eyeglasses frame dataset, we first define a class-specific eyeglasses frame template with pre-defined keypoints. Then, given an input eyeglasses frame image with thin structure and few texture features, we design a keypoint detector and refiner to detect predefined keypoints in a coarse-to-fine manner to estimate the camera pose accurately. After that, using differentiable rendering, we propose a novel optimization approach for producing correct geometry by progressively performing free-form deformation (FFD) on the template mesh. We define a series of loss functions to enforce consistency between the rendered result and the corresponding RGB input, utilizing constraints from inherent structure, silhouettes, keypoints, per-pixel shading information, and so on. Experimental results on both the synthetic dataset and real images demonstrate the effectiveness of the proposed algorithm.
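Classic trivariate Bernstein free-form deformation, the operation progressively applied to the template mesh, can be sketched as follows; the lattice resolution and the normalization of vertices to the unit cube are assumptions.

```python
# Hedged sketch: classic trivariate Bernstein FFD of mesh vertices by a
# control lattice; resolution and normalization are assumptions.
import numpy as np
from math import comb

def ffd(verts: np.ndarray, lattice: np.ndarray) -> np.ndarray:
    """verts: (N, 3) normalized to [0, 1]^3; lattice: (l, m, n, 3) control points."""
    l, m, n, _ = lattice.shape

    def bernstein(deg: int, x: np.ndarray) -> np.ndarray:
        # Returns (deg + 1, N) Bernstein basis values at parameters x.
        i = np.arange(deg + 1)[:, None]
        c = np.array([comb(deg, k) for k in range(deg + 1)])[:, None]
        return c * x[None, :] ** i * (1 - x[None, :]) ** (deg - i)

    Bs = bernstein(l - 1, verts[:, 0])
    Bt = bernstein(m - 1, verts[:, 1])
    Bu = bernstein(n - 1, verts[:, 2])
    # Deformed position = sum over lattice of B_i(s) B_j(t) B_k(u) * P_ijk
    return np.einsum('in,jn,kn,ijkd->nd', Bs, Bt, Bu, lattice)
```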

Citations: 0