Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation

Haichao Zhu
{"title":"Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation","authors":"Haichao Zhu","doi":"arxiv-2409.05151","DOIUrl":null,"url":null,"abstract":"With the advancement of computer vision, dynamic 3D reconstruction techniques\nhave seen significant progress and found applications in various fields.\nHowever, these techniques generate large amounts of 3D data sequences,\nnecessitating efficient storage and transmission methods. Existing 3D model\ncompression methods primarily focus on static models and do not consider\ninter-frame information, limiting their ability to reduce data size. Temporal\nmesh compression, which has received less attention, often requires all input\nmeshes to have the same topology, a condition rarely met in real-world\napplications. This research proposes a method to compress mesh sequences with\narbitrary topology using temporal correspondence and mesh deformation. The\nmethod establishes temporal correspondence between consecutive frames, applies\na deformation model to transform the mesh from one frame to subsequent frames,\nand replaces the original meshes with deformed ones if the quality meets a\ntolerance threshold. Extensive experiments demonstrate that this method can\nachieve state-of-the-art performance in terms of compression performance. The\ncontributions of this paper include a geometry and motion-based model for\nestablishing temporal correspondence between meshes, a mesh quality assessment\nfor temporal mesh sequences, an entropy-based encoding and corner table-based\nmethod for compressing mesh sequences, and extensive experiments showing the\neffectiveness of the proposed method. All the code will be open-sourced at\nhttps://github.com/lszhuhaichao/ultron.","PeriodicalId":501174,"journal":{"name":"arXiv - CS - Graphics","volume":"25 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Graphics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the advancement of computer vision, dynamic 3D reconstruction techniques have seen significant progress and found applications in various fields. However, these techniques generate large amounts of 3D data sequences, necessitating efficient storage and transmission methods. Existing 3D model compression methods primarily focus on static models and do not exploit inter-frame information, limiting how much they can reduce data size. Temporal mesh compression, which has received less attention, often requires all input meshes to share the same topology, a condition rarely met in real-world applications. This research proposes a method to compress mesh sequences with arbitrary topology using temporal correspondence and mesh deformation. The method establishes temporal correspondence between consecutive frames, applies a deformation model to transform the mesh from one frame to subsequent frames, and replaces the original meshes with the deformed ones whenever the resulting quality stays within a tolerance threshold. Extensive experiments demonstrate that the method achieves state-of-the-art compression performance. The contributions of this paper include a geometry- and motion-based model for establishing temporal correspondence between meshes, a mesh quality assessment for temporal mesh sequences, an entropy-based encoding and corner-table-based method for compressing mesh sequences, and extensive experiments showing the effectiveness of the proposed method. All code will be open-sourced at https://github.com/lszhuhaichao/ultron.
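The abstract describes a greedy pipeline: deform the previous frame's mesh toward the current frame, keep the deformed result whenever its quality stays within the tolerance, and otherwise store the frame as a new keyframe. The Python sketch below illustrates that loop under those assumptions; it is not the paper's released implementation, and the `Frame` container and the `deform` and `distortion` callables are hypothetical placeholders for the geometry- and motion-based correspondence model, the deformation model, and the quality assessment.

```python
from dataclasses import dataclass
from typing import Callable, List

import numpy as np


@dataclass
class Frame:
    """One mesh in the sequence: vertex positions plus triangle indices."""
    vertices: np.ndarray  # (V, 3) float array of vertex positions
    faces: np.ndarray     # (F, 3) int array of triangle indices


def compress_sequence(
    frames: List[Frame],
    deform: Callable[[Frame, Frame], Frame],      # hypothetical deformation model
    distortion: Callable[[Frame, Frame], float],  # hypothetical quality metric
    tolerance: float,
) -> List[dict]:
    """Greedy keyframe/deformation selection over a mesh sequence.

    Each frame is either stored in full (a keyframe) or replaced by a
    deformation of the current reference mesh, provided the deformed mesh
    approximates the original within `tolerance`.
    """
    encoded: List[dict] = []
    reference = None
    for frame in frames:
        if reference is not None:
            candidate = deform(reference, frame)
            if distortion(candidate, frame) <= tolerance:
                # Reuse the reference connectivity; only the vertex positions
                # (or deformation parameters) need to be stored for this frame.
                encoded.append({"type": "deformed", "vertices": candidate.vertices})
                reference = candidate
                continue
        # First frame, or the deformation quality exceeded the tolerance:
        # keep the full mesh as a new keyframe.
        encoded.append({"type": "keyframe", "frame": frame})
        reference = frame
    return encoded
```

In this sketch, only keyframes carry full connectivity, which is where a corner-table representation and entropy coding of the stored data would apply; deformed frames reduce to per-vertex updates, which is where the sequence-level savings come from.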