c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources

M. Ritz, Martin Knuth, M. Domajnko, Oliver Posniak, Pedro Santos, D. Fellner
Eurographics Workshop on Graphics and Cultural Heritage, published 2016-10-05
DOI: 10.2312/gch.20161377

Abstract

We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our technique solves the problem of fusing all sources captured asynchronously from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set, the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time-sequence of frame subsets, each subject to photogrammetric 3D reconstruction. The result is a timeline of 3D models, each representing a snapshot of the scene's evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured, dynamically changing 3D geometry of an event over time, enabling the user to interact with it in the very same way as with a static 3D model. We perform image analysis to automatically maximize the quality of results in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module available to mobile end-users.
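The temporal-fusion step described in the abstract — pooling asynchronously captured frames from heterogeneous devices, sorting them on a common time axis, and discretizing them into per-timestep subsets for photogrammetric reconstruction — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `Frame` type, the fixed-width `window` binning, and all identifiers are assumptions for the example.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Frame:
    source_id: str    # hypothetical: which mobile device captured the frame
    timestamp: float  # seconds on an assumed common (synchronized) time axis
    image_path: str   # decoded video frame on disk

def bin_frames(frames, window=0.5):
    """Sort asynchronously captured frames onto a common time axis and
    discretize them into fixed-width time windows. Each returned subset
    would then feed one photogrammetric 3D reconstruction, yielding one
    4D frame of the timeline."""
    ordered = sorted(frames, key=lambda f: f.timestamp)
    bins = defaultdict(list)
    for f in ordered:
        bins[int(f.timestamp // window)].append(f)
    # emit the time-sequence of frame subsets in chronological order
    return [bins[k] for k in sorted(bins)]
```

For example, frames from two devices at timestamps 0.1, 0.4, 0.6, and 1.2 s with a 0.5 s window would yield three subsets, the first containing the two frames that fall into the interval [0, 0.5). How the real system chooses the discretization step (and whether windows overlap) is not specified in the abstract.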