A High-Performance Learning-Based Framework for Monocular 3-D Point Cloud Reconstruction

IEEE Journal of Radio Frequency Identification, vol. 8, pp. 695-712 · IF 2.3, Q2 (Engineering, Electrical & Electronic) · Published: 2024-07-29 · DOI: 10.1109/JRFID.2024.3435875
AmirHossein Zamani; Kamran Ghaffari; Amir G. Aghdam
{"title":"基于学习的高性能单目三维点云重建框架","authors":"AmirHossein Zamani;Kamran Ghaffari;Amir G. Aghdam","doi":"10.1109/JRFID.2024.3435875","DOIUrl":null,"url":null,"abstract":"An essential yet challenging step in the 3D reconstruction problem is to train a machine or a robot to model 3D objects. Many 3D reconstruction applications depend on real-time data processing, so computational efficiency is a fundamental requirement in such systems. Despite considerable progress in 3D reconstruction techniques in recent years, developing efficient algorithms for real-time implementation remains an open problem. The present study addresses current issues in the high-precision reconstruction of objects displayed in a single-view image with sufficiently high accuracy and computational efficiency. To this end, we propose two neural frameworks: a CNN-based autoencoder architecture called Fast-Image2Point (FI2P) and a transformer-based network called TransCNN3D. These frameworks consist of two stages: perception and construction. The perception stage addresses the understanding and extraction process of the underlying contexts and features of the image. The construction stage, on the other hand, is responsible for recovering the 3D geometry of an object by using the knowledge and contexts extracted in the perception stage. The FI2P is a simple yet powerful architecture to reconstruct 3D objects from images faster (in real-time) without losing accuracy. Then, the TransCNN3D framework provides a more accurate 3D reconstruction without losing computational efficiency. The output of the reconstruction framework is represented in the point cloud format. The ShapeNet dataset is utilized to compare the proposed method with the existing ones in terms of computation time and accuracy. Simulations demonstrate the superior performance of the proposed strategy. Our dataset and code are available on IEEE DataPort website and first author’s GitHub repository respectively.","PeriodicalId":73291,"journal":{"name":"IEEE journal of radio frequency identification","volume":"8 ","pages":"695-712"},"PeriodicalIF":2.3000,"publicationDate":"2024-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A High-Performance Learning-Based Framework for Monocular 3-D Point Cloud Reconstruction\",\"authors\":\"AmirHossein Zamani;Kamran Ghaffari;Amir G. Aghdam\",\"doi\":\"10.1109/JRFID.2024.3435875\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"An essential yet challenging step in the 3D reconstruction problem is to train a machine or a robot to model 3D objects. Many 3D reconstruction applications depend on real-time data processing, so computational efficiency is a fundamental requirement in such systems. Despite considerable progress in 3D reconstruction techniques in recent years, developing efficient algorithms for real-time implementation remains an open problem. The present study addresses current issues in the high-precision reconstruction of objects displayed in a single-view image with sufficiently high accuracy and computational efficiency. To this end, we propose two neural frameworks: a CNN-based autoencoder architecture called Fast-Image2Point (FI2P) and a transformer-based network called TransCNN3D. These frameworks consist of two stages: perception and construction. The perception stage addresses the understanding and extraction process of the underlying contexts and features of the image. 
The construction stage, on the other hand, is responsible for recovering the 3D geometry of an object by using the knowledge and contexts extracted in the perception stage. The FI2P is a simple yet powerful architecture to reconstruct 3D objects from images faster (in real-time) without losing accuracy. Then, the TransCNN3D framework provides a more accurate 3D reconstruction without losing computational efficiency. The output of the reconstruction framework is represented in the point cloud format. The ShapeNet dataset is utilized to compare the proposed method with the existing ones in terms of computation time and accuracy. Simulations demonstrate the superior performance of the proposed strategy. Our dataset and code are available on IEEE DataPort website and first author’s GitHub repository respectively.\",\"PeriodicalId\":73291,\"journal\":{\"name\":\"IEEE journal of radio frequency identification\",\"volume\":\"8 \",\"pages\":\"695-712\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-07-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal of radio frequency identification\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10614399/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of radio frequency identification","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10614399/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

An essential yet challenging step in 3D reconstruction is training a machine or robot to model 3D objects. Many 3D reconstruction applications depend on real-time data processing, making computational efficiency a fundamental requirement in such systems. Despite considerable progress in 3D reconstruction techniques in recent years, developing efficient algorithms for real-time implementation remains an open problem. The present study addresses the problem of reconstructing an object from a single-view image with both high accuracy and high computational efficiency. To this end, we propose two neural frameworks: a CNN-based autoencoder architecture called Fast-Image2Point (FI2P) and a transformer-based network called TransCNN3D. Both frameworks consist of two stages: perception and construction. The perception stage extracts the underlying contexts and features of the image; the construction stage then recovers the 3D geometry of the object from the knowledge extracted in the perception stage. FI2P is a simple yet powerful architecture that reconstructs 3D objects from images in real time without losing accuracy, while TransCNN3D provides a more accurate reconstruction without sacrificing computational efficiency. The output of both frameworks is represented in point cloud format. The ShapeNet dataset is used to compare the proposed methods with existing ones in terms of computation time and accuracy, and simulations demonstrate the superior performance of the proposed strategy. Our dataset and code are available on the IEEE DataPort website and the first author's GitHub repository, respectively.
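The abstract describes both frameworks as a "perception" encoder that compresses the input image into features, followed by a "construction" decoder that emits a point cloud. Below is a minimal PyTorch sketch of what such a two-stage pipeline can look like; the layer sizes, latent dimension, input resolution, and point count are illustrative assumptions, not the paper's actual FI2P or TransCNN3D configuration.

```python
# A minimal sketch of a two-stage (perception -> construction) monocular
# point cloud network, assuming an encoder-decoder layout. All dimensions
# below are illustrative; the abstract does not specify them.
import torch
import torch.nn as nn

class MonocularPointCloudNet(nn.Module):
    def __init__(self, latent_dim: int = 512, num_points: int = 1024):
        super().__init__()
        self.num_points = num_points
        # Perception stage: extract image features with a small CNN.
        self.perception = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 32 -> 16
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),                                # -> (B, 128, 1, 1)
            nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Construction stage: decode the latent code into 3D coordinates.
        self.construction = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_points * 3),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        latent = self.perception(image)             # (B, latent_dim)
        points = self.construction(latent)          # (B, N*3)
        return points.view(-1, self.num_points, 3)  # (B, N, 3)

# Usage: a batch of 128x128 RGB images -> a batch of 1024-point clouds.
net = MonocularPointCloudNet()
cloud = net(torch.randn(2, 3, 128, 128))
print(cloud.shape)  # torch.Size([2, 1024, 3])
```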
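The abstract reports accuracy comparisons on ShapeNet but does not name the metric. For point cloud reconstruction, the symmetric Chamfer distance is the usual choice, so the sketch below assumes that metric; treat it as an illustration rather than the paper's exact evaluation protocol.

```python
# Symmetric Chamfer distance between two point clouds, a common accuracy
# metric on ShapeNet. Assumption: the paper's accuracy metric is not named
# in the abstract, so Chamfer distance is used here for illustration.
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """a: (N, 3) predicted points, b: (M, 3) ground-truth points."""
    # Pairwise squared distances between every point in a and every point in b.
    d = torch.cdist(a, b).pow(2)  # (N, M)
    # Nearest-neighbour distance in each direction, averaged and summed.
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

pred = torch.randn(1024, 3)
gt = torch.randn(1024, 3)
print(chamfer_distance(pred, gt))
```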