Event-based optical flow: Method categorisation and review of techniques that leverage deep learning

Robert Guamán-Rivera, Jose Delpiano, Rodrigo Verschae

Neurocomputing, vol. 635, Article 129899. Published 2025-03-11. DOI: 10.1016/j.neucom.2025.129899
https://www.sciencedirect.com/science/article/pii/S0925231225005715
Citations: 0
Abstract
Developing new convolutional neural network architectures and event-based camera representations could play a crucial role in autonomous navigation, pose estimation, and visual odometry applications. This study explores the potential of event cameras for optical flow estimation using convolutional neural networks. We provide a detailed description of the principles of operation and the software available for extracting and processing information from event cameras, along with the various event representation methods this technology offers. We identify four categories of methods for estimating optical flow with event cameras: gradient-based, frequency-based, correlation-based and neural network models. We report on these categories, including their latest developments, current status and challenges. We provide information on existing datasets and identify the appropriate dataset for evaluating deep learning-based optical flow estimation methods. We evaluate the accuracy of the implemented methods using the average endpoint error metric, and we assess the efficiency of the algorithms in terms of execution time. Finally, we discuss research directions that promise future advances in this field.
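The average endpoint error (AEE) mentioned above is the standard accuracy metric for optical flow: the Euclidean distance between predicted and ground-truth flow vectors, averaged over all evaluated pixels. A minimal sketch of the computation (a generic NumPy illustration, not code from the reviewed paper; array shapes are an assumption):

```python
import numpy as np

def average_endpoint_error(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average endpoint error between two dense flow fields.

    Both arrays are assumed to have shape (H, W, 2), where the last
    axis holds the horizontal (u) and vertical (v) flow components.
    """
    diff = flow_pred - flow_gt
    # Per-pixel endpoint error: Euclidean norm of the flow difference.
    epe = np.sqrt(np.sum(diff ** 2, axis=-1))
    return float(epe.mean())

# A prediction offset from ground truth by (3, 4) at every pixel
# yields an endpoint error of 5 everywhere, hence an AEE of 5.
gt = np.zeros((4, 4, 2))
pred = gt + np.array([3.0, 4.0])
print(average_endpoint_error(pred, gt))  # → 5.0
```

Benchmarks often report AEE only over pixels with valid ground truth or with events; masking those pixels before averaging is a common variation.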
Journal description:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice and applications.