FPGA-Based Low-Bit and Lightweight Fast Light Field Depth Estimation
Jie Li; Chuanlun Zhang; Wenxuan Yang; Heng Li; Xiaoyan Wang; Chuanjun Zhao; Shuangli Du; Yiguang Liu
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 33, no. 1, pp. 88-101. Published 2024-11-19. DOI: 10.1109/TVLSI.2024.3496751
Citations: 0
Abstract
3-D vision computing is a key application in unmanned systems, satellites, and planetary rovers. Learning-based light field (LF) depth estimation is one of the major research directions in 3-D vision computing. However, conventional learning-based depth estimation methods involve a large number of parameters and floating-point operations, making it challenging to achieve low-power, fast, and high-precision LF depth estimation on a field-programmable gate array (FPGA). Motivated by this issue, an FPGA-based low-bit, lightweight LF depth estimation network (L$^{3}$FNet) is proposed. First, a hardware-friendly network is designed with few weight parameters, a low computational load, and a simple architecture, at only a minor cost in accuracy. Second, an efficient hardware unit design and a software-hardware collaborative dataflow architecture are applied to construct an FPGA-based fast, low-bit acceleration engine. Experimental results show that, compared with state-of-the-art works with lower mean-square error (MSE), L$^{3}$FNet reduces the computational load by more than 109 times and the weight parameters by approximately 78 times. Moreover, on the ZCU104 platform, it uses 95.65% of the lookup tables (LUTs), 80.67% of the digital signal processors (DSPs), 80.93% of the block RAM (BRAM), and 58.52% of the LUTRAM, and consumes 9.493 W to realize an efficient acceleration engine with a latency as low as 272 ns. The code and model of the proposed method are available at https://github.com/sansi-zhang/L3FNet.
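As context for the "low-bit" terminology used above, the sketch below shows plain uniform symmetric weight quantization in Python/NumPy: floating-point weights are mapped to small signed integers plus a scale factor, which is the general mechanism that lets FPGA multiply-accumulate units operate on narrow integers instead of floats. This is only an illustration of the idea; the bit-width, the per-tensor scaling, and the function names are assumptions and do not reflect the actual L$^{3}$FNet quantization scheme, which is described in the paper and the linked repository.

import numpy as np

def quantize_weights(w: np.ndarray, n_bits: int = 4):
    """Map float weights to signed n_bits integers plus a per-tensor scale (illustrative only)."""
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(w)) / qmax if np.any(w) else 1.0
    w_q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return w_q, scale                                  # integers for the hardware, scale for dequantization

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer representation."""
    return w_q.astype(np.float32) * scale

# Example: quantize a random 3x3 convolution kernel bank and check the reconstruction error.
w = np.random.randn(16, 16, 3, 3).astype(np.float32)
w_q, s = quantize_weights(w, n_bits=4)
print("max abs reconstruction error:", np.max(np.abs(w - dequantize(w_q, s))))

Lower bit-widths shrink both the weight storage (here roughly 8x versus float32) and the width of each multiplier, which is why low-bit networks map efficiently onto LUT- and DSP-limited FPGA fabrics.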
About the journal:
The IEEE Transactions on VLSI Systems is published as a monthly journal under the co-sponsorship of the IEEE Circuits and Systems Society, the IEEE Computer Society, and the IEEE Solid-State Circuits Society.
Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chip and wafer fabrication, packaging, testing, and systems applications. Generation of specifications, design, and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor, and process levels.
To address this critical area through a common forum, the IEEE Transactions on VLSI Systems was founded. The editorial board, consisting of international experts, invites original papers that emphasize the novel systems-integration aspects of microelectronic systems, including interactions among systems design and partitioning, logic and memory design, digital and analog circuit design, layout synthesis, CAD tools, chip and wafer fabrication, testing and packaging, and system-level qualification. Thus, the coverage of these Transactions focuses on VLSI/ULSI microelectronic systems integration.